AI-Crafted Overviews, Detailed How-Tos, and Code Snippets for Accelerated Learning and Productivity.


Learn as you search (and browse) using generative AI

Starting now, SGE (Google’s Search Generative Experience) is throwing in some rad features to make understanding and fixing code a breeze. You know those AI-driven overviews that help you with different programming languages? Now they’re going to highlight bits of code in different colors, so things like keywords and comments pop. Imagine a disco, but for code!
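SGE’s highlighter itself isn’t public, but the idea is easy to sketch: scan the code, spot keywords and comments, and wrap them in colors. Here’s a toy stdlib-only Python version; the ANSI color map and the naive “everything after `#` is a comment” rule are made up for illustration, not how Google does it.

```python
import keyword
import re

# ANSI colors: a rough stand-in for the colored keywords/comments SGE shows.
COLORS = {"keyword": "\033[94m", "comment": "\033[92m", "reset": "\033[0m"}

def highlight(source: str) -> str:
    """Wrap Python keywords and comments in ANSI color codes."""
    def color_line(line: str) -> str:
        # Color a trailing comment, if any (naive: ignores '#' inside strings).
        code, sep, comment = line.partition("#")
        if sep:
            comment = f"{COLORS['comment']}#{comment}{COLORS['reset']}"
        # Color keywords in the code portion.
        kw_pattern = r"\b(" + "|".join(keyword.kwlist) + r")\b"
        code = re.sub(kw_pattern, COLORS["keyword"] + r"\1" + COLORS["reset"], code)
        return code + comment
    return "\n".join(color_line(l) for l in source.splitlines())

print(highlight("def add(a, b):\n    return a + b  # sum two numbers"))
```

Run it in a terminal and `def`, `return`, and the comment come out in different colors, which is the whole trick behind making keywords “pop.”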

But that’s not all. Ever been down a rabbit hole trying to understand a new topic online and getting lost in walls of text? Google’s stepping in with an “SGE while browsing” feature. It’s already in action on your Google app (Android or iOS) and will soon hit Chrome desktop. Here’s the deal: this feature will break down long articles into the main goodies for you. So, instead of skimming, you can tap and get straight to the good stuff. It even points out key questions the article answers, so you can jump right to the knowledge bombs.


Using GPT-4 for content moderation

GPT-4 is a game changer for content moderation. Traditional content moderation is a slow and taxing process, often burdening human moderators. But by using GPT-4, we can develop and update content policies faster, in hours instead of months. Plus, it allows for more consistent labeling of content. 

GPT-4 takes the heavy lifting off human moderators by understanding and applying rules in long policy documents, and quickly adapting to policy changes. Here’s how it works: Policy experts write the rules and label a small set of examples based on those rules. GPT-4 then reads the rules and labels the same examples, without peeking at the answers. Any differences between the human and AI labels are discussed, and the rules are clarified if needed. 

This back-and-forth continues until everyone’s happy with the policy quality. Then the refined policies are used to moderate content on a large scale. And the best part? Anyone with access to OpenAI’s API can use this approach to build their own AI-powered moderation system.
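The loop described above is straightforward to sketch. In the snippet below, `model_label` is a hypothetical stand-in for the GPT-4 call (OpenAI’s actual prompts and API usage aren’t shown); the part that matters is the disagreement check that tells the policy experts which rules need clarifying.

```python
def model_label(policy: str, example: str) -> str:
    # Hypothetical stand-in for the GPT-4 call that reads the policy and
    # labels one example. Here it is just a naive keyword rule.
    flagged = any(term in example.lower() for term in ("scam", "threat"))
    return "violates" if flagged else "ok"

def disagreements(policy, examples, human_labels):
    """Run the model over the expert-labeled examples and return the cases
    where model and human disagree, so the policy wording can be clarified."""
    diffs = []
    for example, gold in zip(examples, human_labels):
        model = model_label(policy, example)
        if model != gold:
            diffs.append((example, gold, model))
    return diffs

policy = "Flag content that promotes scams or threats."
examples = ["Please wire money to unlock your prize.", "Nice weather today."]
human_labels = ["violates", "ok"]

for example, gold, model in disagreements(policy, examples, human_labels):
    print(f"Disagreement on {example!r}: human={gold}, model={model}")
```

Each disagreement is exactly the kind of case the experts discuss before tightening the policy text and re-running the loop.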


Google Photos adds a scrapbook-like Memories view feature aided by AI

Google Photos is rolling out a cool “Memories view” feature in the U.S. It’s like a digital scrapbook where you can save and showcase special moments. Though they’ve had a Memories feature for about four years (which half a billion folks use every month), this new update lets you get craftier. 

You can pick and choose photos, name the memory however you like, or let their AI suggest titles. And if you’re thinking “I want my crew in on this,” you can. Friends and family can chip in with their own pics and videos. Once you’ve got it all jazzed up, you can share it on social media and even turn it into a video. For now, this feature’s a U.S. exclusive, but it’ll go worldwide soon.


Microsoft Azure ChatGPT allows enterprises to run ChatGPT within their network

Microsoft’s Azure ChatGPT lets businesses run ChatGPT inside their own network, making tasks like fixing code easier. They dropped the tool on GitHub, so it’s free for all, with hosting handled through Azure. So, if you’re already on the Azure train, hopping on this shouldn’t be a big deal.

ChatGPT’s been blowing up, with lots of businesses using it to get stuff done faster. The Azure version is like having a private ChatGPT all for yourself. Benefits? It keeps your data private, gives you full control, and you can even mix it up with your own data or add-ons. 


WhatsApp starts testing AI-generated stickers

WhatsApp’s cooking up some cool AI-generated sticker features in their latest Android beta. This means you type a description, and BOOM, you get a sticker. It’s still up in the air which AI model they’re using, but word on the street (thanks to WABetaInfo) is that Meta’s lending them some secure tech for this. 

If you’ve heard of Midjourney or OpenAI’s DALL-E models that whip up images from text, this works kinda the same. While you can report bad stickers, we don’t know how WhatsApp’s guarding against the dodgy stuff. These AI-made stickers will have something on ’em to let you know they’re AI-generated, kinda like Bing does with its images. Oh, and Instagram’s also thinking about tagging AI-created content.


Dialpad launches generative AI trained on 5 years of proprietary conversational data

Dialpad, a company from San Francisco, just dropped a new AI tool called DialpadGPT. It’s designed to help out with customer service, sales, and hiring tasks. Think of it as a tool that can quickly sum up what’s been talked about in these areas. They spent five years working on it using loads of chat data, and the idea is to put it in their video meeting and call platform.

Now, other chat AIs out there, like ChatGPT, are cool and all, but they’re general. DialpadGPT is special because it’s made for business talk. Craig Walker, the head honcho at Dialpad, said that to make a tool this tailored, you need to have deep control of the data and tech. Some big names, like Marc Andreessen, are backing the company, and they believe this tool will be a game-changer for businesses.


Voiceflow, a platform for building conversational AI experiences, raises $15M

Voiceflow, a platform for creating conversational AI systems, just raked in $15 million in a funding round led by OpenView, bringing their total funding up to $35 million and putting the company’s value at a solid $105 million.

Braden Ream, CEO of Voiceflow, says they’re using the funds to up their game in product innovation. They’ll be adding a new tool for users to build and deploy AI customer support agents, using what’s called large language models. Voiceflow aims to be the ultimate collaborative platform for building these AI agents. They’re all about automating customer support for websites and apps.


Browse AI helps companies build bots to scrape website data and put it to work

Scraping data off websites is usually tedious, but Browse AI, a startup, made it way easier with an automated tool that takes data from websites and puts it straight into spreadsheets or APIs. They just got $2.8 million in seed funding.

Browse AI’s CEO, Ardy Naghshineh, wanted to level the playing field by making web data more accessible, especially for small businesses. His tool is a SaaS app that trains a bot to grab specific data from the web. They’re only going after public info, like property listings or e-commerce product prices.
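As a rough sketch of the kind of job a Browse AI bot does, pulling structured fields off a page into a spreadsheet, here’s a stdlib-only Python version. The HTML and the `name`/`price` class names below are invented sample data, not Browse AI’s actual API or any real site.

```python
import csv
import io
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collect (name, price) pairs from <span class="name"> / <span class="price">."""
    def __init__(self):
        super().__init__()
        self.field = None      # which field the next text chunk belongs to
        self.current = {}      # partially assembled row
        self.rows = []         # completed (name, price) pairs

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self.field = cls

    def handle_data(self, data):
        if self.field:
            self.current[self.field] = data.strip()
            if len(self.current) == 2:
                self.rows.append((self.current["name"], self.current["price"]))
                self.current = {}
            self.field = None

# Made-up sample page standing in for a scraped e-commerce listing.
html = """
<div><span class="name">Desk lamp</span><span class="price">$24.99</span></div>
<div><span class="name">Notebook</span><span class="price">$3.50</span></div>
"""

parser = PriceParser()
parser.feed(html)

# Write the extracted rows as CSV, i.e. "straight into a spreadsheet."
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["name", "price"])
writer.writerows(parser.rows)
print(out.getvalue())
```

Browse AI’s pitch is that you get this result by pointing and clicking instead of writing a parser per site, but the input-to-spreadsheet shape of the job is the same.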

Browse AI is making the cash last: they’ve spent only half of their initial $400,000 investment. The team is 18 people today, with plans to grow to 50 in the next year, set up in Vancouver with a mix of in-person and remote work. Naghshineh, an immigrant to Canada himself, is making sure the team is diverse; he thinks it’s not just the right thing to do but also good for business.


The New York Times prohibits using its content to train AI models

The New York Times (NYT) isn’t keen on folks using its content to train AI. They recently updated their Terms of Service (ToS), as spotted by Adweek, to say that you can’t use any of their content, including text, pics, videos, and even the site’s overall look and feel, to develop software, especially if it involves training AI.

This move by the NYT might be a reaction to Google’s recent privacy policy update. Google admitted they might use data from the web to train their AI tools, like Bard or Cloud AI. A lot of AI, like OpenAI’s ChatGPT, is trained using big chunks of data, and some of that data might be copyrighted or protected, and grabbed without asking first.

OpenAI now lets website owners stop its GPTBot from taking content, and Microsoft put in new rules against using its AI products to make or upgrade other AI services. Plus, Microsoft users can’t grab data from its AI tools.
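The GPTBot opt-out works through a site’s plain old robots.txt file, per OpenAI’s documentation. Blocking the crawler from an entire site looks like this:

```
# robots.txt — tell OpenAI's GPTBot crawler to stay off the whole site
User-agent: GPTBot
Disallow: /
```

Swapping `Disallow: /` for a specific path restricts only that part of the site, the same way robots.txt rules work for any other crawler.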


Computer Server Demand Is Fading. AI May Be Why.

Demand for old-school computer servers is cooling off fast. Why? Companies are throwing their money at artificial intelligence (AI) gear instead. Mehdi Hosseini, a big-shot analyst at Susquehanna, says server sales aren’t doing too hot: he expected them to grow 10% this quarter, but they’re only climbing 3%. Meanwhile, AI server sales are booming, up 23% from last year.

Companies like Dell and HP, which sell a ton of servers, might feel the pinch. Everyone’s talking about how AI is the new big thing. Mark Liu from Taiwan Semiconductor even said AI’s stealing the limelight from traditional server chips. Another group, Piper Sandler, did a survey and found out AI’s the top tech trend on everyone’s mind for the next few years. 


AI isn’t great at decoding human emotions. So why are regulators targeting the tech?

Emotion recognition AI is about machines trying to figure out how you’re feeling by looking at your face, voice, or the way you move. Think of it like trying to guess if someone’s laughing or crying. But many folks say it’s a load of baloney and might even be a dangerous idea.

The legend Charles Darwin once wondered about human emotions and how universal they really were. Now, some folks in Silicon Valley are trying to figure it out with AI. But just because a computer thinks you’re happy when you’re laughing doesn’t mean it gets the whole picture. Emotions are tricky, and even we humans mess up reading them.

There are some companies selling this tech for stuff like checking if a driver is sleepy or if someone liked a movie trailer. But there are also some worrying uses. Like some places are using it to guess if someone’s lying or even for surveillance. 

This debate makes us question how far we should trust AI. While it’s all flashy and futuristic, not everything can be boiled down to a simple math problem.


How FraudGPT presages the future of weaponized AI

FraudGPT is a new AI tool found on the dark web that makes it easier for inexperienced attackers to launch cyberattacks. For a fee, users get a crash course in cyberattacks, including writing phishing emails, creating malware, and finding vulnerabilities. Despite being basic compared to the tools used by nation-state attack groups, it is democratizing access to weaponized AI, potentially creating a surge in cyberattacks from novice attackers. 

Big cybersecurity vendors are sounding the alarm on this trend and pushing for better AI-powered defenses. AI in cyberattacks is still in the early stages, but it’s already automating social engineering, generating malware, and finding cybercrime resources faster than before. 

The biggest threat is how quickly it can expand the global base of attackers, putting soft targets in education, health care, and government at risk. Cybersecurity teams need to stay ahead in this new AI arms race, considering how AI and ML can help detect subtle indicators of AI-driven attacks, even if they appear legitimate.
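As a toy illustration of the signal-based triage a defense team might start from when flagging phishing at scale: score an email by how many suspicious patterns it trips. Real detection relies on trained ML models, and every rule and pattern below is invented for the example, not any vendor’s product.

```python
import re

# Illustrative red flags only; a production system would learn these from data.
SIGNALS = {
    "urgency": r"\b(urgent|immediately|act now|within 24 hours)\b",
    "credentials": r"\b(password|verify your account|login|ssn)\b",
    "payment": r"\b(wire transfer|gift card|bitcoin|invoice attached)\b",
    "suspicious_link": r"https?://\d{1,3}(\.\d{1,3}){3}",  # raw-IP URLs
}

def phishing_score(text: str) -> tuple[int, list[str]]:
    """Return (score, matched signal names); higher score = more suspicious."""
    text = text.lower()
    hits = [name for name, pattern in SIGNALS.items() if re.search(pattern, text)]
    return len(hits), hits

email = "URGENT: verify your account at http://192.0.2.7/login within 24 hours"
score, hits = phishing_score(email)
print(score, hits)
```

Hand-written rules like these are exactly what AI-generated phishing is good at evading, which is the article’s point: defenders need models that pick up subtler indicators than keyword lists.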