How OpenAI and the Associated Press are Collaborating to Shape the Future of Machine Learning and Digital Journalism


OpenAI strikes deal with AP to pay for using its news in training AI

OpenAI, the brain behind ChatGPT, is shelling out dough to use Associated Press (AP) news articles for training its AI. This marks the first big agreement in a hot debate on whether tech giants should pay creators for the content they pull off the net for AI development.

OpenAI will have the keys to AP’s text story vault dating back to 1985. Plus, AP gets to play around with OpenAI’s tech, testing how it might upgrade their journalism game.

AP has been using automation for local sports and financial reports for years, but it doesn’t use AI to write stories.

Pushback is surging from writers, musicians, news outlets, and social media platforms. They argue this use of their work to train AI is a game changer, since some AI tools already replace human jobs. A recent wave of lawsuits against OpenAI and Google alleges wrongful data use.

There’s chat about tech companies and creators hammering out more deals like the AP-OpenAI one to create a “clean database”. But with the truckloads of data needed to train these models, getting enough people on board could be tough.


Google’s Bard AI chatbot has learned to talk

Google’s chatbot, Bard, just got an upgrade. The latest updates include more language skills, better response controls, and a new spoken word feature. Basically, this bot can now yak in nearly 50 languages including Arabic, Chinese, German, Hindi, and Spanish. Also, it’s accessible from more corners of the globe, like Brazil and Europe.

The neat part? Bard now talks out loud. This will come in handy if you’re trying to nail down the correct pronunciation in one of those new languages. Plus, you can now tweak how Bard chats with you by choosing from five options: simple, long, short, professional, or casual. This feature is only in English right now, but they’re working on rolling it out for the other languages.

Bard also got an upgrade in the ‘seeing’ department. Now, it can understand pictures you drop in the chat and provide more info about the image or even come up with captions. For now, this is an English-only deal too.

Sharing Bard’s wisdom also got a whole lot easier. Users can now export Python code that Bard spits out to Replit, besides Colab. Plus, you can copy and share chat bits with others. They’re also making it easier to keep track of old chats with pinned conversations and the option to rename them. So, if you’re a fan of organized chit-chat, you’re in luck.


Stability AI releases Stable Doodle, a sketch-to-image tool

AI whizzkids at Stability AI have whipped up Stable Doodle, a slick new tool that takes your doodles and spits out pretty pictures. They got this tool as part of a shopping spree back in March when they bought Init ML, another AI company started by some old Google hands. This ain’t your regular sketch-to-image tool – it’s got a unique feature that gives you more control over how your final image looks.

Stable Doodle is powered by Stable Diffusion XL, a fancy engine that works with a tech solution called T2I-Adapter from Tencent’s R&D folks. This combo helps Stable Doodle understand your doodles and create images based on them. I didn’t get a chance to test this baby out before it hit the market, but judging by the samples, it can turn scribbles that look like a child’s finger painting into polished images.

You can give Stable Doodle some extra directions to help it understand what you want, like “Draw me a comfy chair in an ‘isometric’ style” or “A cat rocking a jeans jacket in a ‘digital art’ style”. At launch, though, you’ve got only 14 art styles to choose from.
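
Stability AI hasn’t published how those style presets get combined with your text hint, but the pattern above suggests the style is simply tacked onto the prompt. Here’s a toy sketch of that idea – the function name and the style list are my own illustration, not Stable Doodle’s actual API:

```python
# Hypothetical sketch of assembling a prompt with a preset art style,
# in the "subject in a 'style' style" format the examples above use.
# The style list here is an illustrative subset, not the real 14.

ART_STYLES = {"isometric", "digital art", "photographic", "comic book"}

def build_doodle_prompt(subject: str, style: str) -> str:
    """Combine a text hint with one of the preset art styles."""
    if style not in ART_STYLES:
        raise ValueError(f"unsupported style: {style!r}")
    return f"{subject} in a '{style}' style"

print(build_doodle_prompt("A cat rocking a jeans jacket", "digital art"))
# A cat rocking a jeans jacket in a 'digital art' style
```

Restricting the choice to a fixed set of styles mirrors the launch limitation mentioned above: only a handful of presets are on the menu for now.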

Besides drawing your wildest ideas, you can also use Stable Doodle to whip up designs for clients, create killer presentations, or even sketch out logos. In the future, they’re planning to add more practical uses, like sketches for real estate.


Bluehost Unveils AI-Powered WordPress Platform

Bluehost’s hittin’ the scene with a new product called WonderSuite. It’s like a trusty sidekick that helps folks whip up a website or online store using WordPress, with none of the headaches.

It’s all about simplifying things. Here’s the President of Newfold Digital, Bluehost’s parent company, Ed Jay:

“With WonderSuite, we’re making the building process of a website or store simple, easy, and fast for our customers. They can get their site out there and start seeing results for their business faster.”

WonderSuite has six features that hold your hand from start to finish:

Onboarding: You answer some questions, and the system picks out what you need to start building your website.

Theme: You start off with a basic design and some patterns that you’ll use in the next step.

WordPress Blocks: These are like the Lego bricks of your website. You can personalize your site depending on what you need it for.

WonderHelp AI Guidance: This is a helper module that guides you through each step. It’s like having a built-in, smart tutor.

WonderCart: This is for the folks looking to sell stuff online. It has features that help with promotions and sales.

AI-Powered Content Generation: This helps you create the words for your website, like product descriptions. This feature will be available later in 2023.

Basically, WonderSuite is about making WordPress easier for everyone. It allows anyone to create a snazzy ecommerce site using WordPress, one of the most trusted website publishing platforms out there.


Microsoft tests an AI hub for the Windows 11 app store

Microsoft’s cooking up a new AI hub for its Windows 11 App Store, currently letting Insiders have a sneak peek in its Preview Build. This hub is a one-stop-shop for AI-based apps from both Microsoft and third-party devs. Also, they’ve got plans to introduce AI-created summaries for app reviews, although that’s still on the back burner for now.

The hub’s got its own special spot on the left-hand menu, right under the Movies & TV tab. It’s tricky to figure out exactly what apps we’ll find there without access to the preview build, but it seems like AI photo editing tools like Luminar Neo might be in the mix.

The Microsoft Store’s also getting a price tracker feature, showing the biggest price drops for an app in the past 30 days. Handy for figuring out whether to snap up an app now or hold out for a better deal. Other updates on the horizon are 3D emojis, promised a couple of years back, and a bug fix for Zune players.
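
Microsoft hasn’t said how the tracker crunches the numbers, but a “biggest price drop in the past 30 days” stat boils down to scanning a price history for the largest fall from an earlier peak. A rough sketch of that calculation (my own illustration, not Microsoft’s code):

```python
# Illustrative sketch: find the biggest drop from an earlier peak to a
# later price within a 30-day window. Prices are in cents to avoid
# floating-point noise.

def biggest_drop(prices: list[int]) -> int:
    """Largest decline from any earlier price to a later one (0 if prices never fell)."""
    drop = 0
    peak = 0
    for p in prices:
        peak = max(peak, p)          # highest price seen so far
        drop = max(drop, peak - p)   # deepest fall below that peak
    return drop

last_30_days = [999, 999, 749, 749, 499, 699]  # hypothetical daily prices
print(biggest_drop(last_30_days))  # 500, i.e. a $5.00 drop
```

A single pass like this is enough because the biggest drop always runs from some running maximum down to a later low point.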


Kakao ups its game in generative AI with Karlo 2.0, an AI image generator

Kakao, the big tech player from South Korea, is getting its game face on in the world of artificial intelligence (AI). They’ve cranked up their AI image generator tool, Karlo, to version 2.0. This tool works like magic, making images from just a few words you type in, and it’s available in English and Korean.

They’re also jazzing up their language model, KoGPT, with a 2.0 version set to roll out later in the year. Plus, they’re setting up a fund worth $7.7 million to support fresh-faced startups working on image generation tech.

Now, this ain’t Kakao’s first rodeo with AI. They’re speeding things up and making upgrades faster than ever, likely feeling the heat from new kids on the block like OpenAI. Their latest tool, Karlo 2.0, can churn out clearer, more diverse images compared to the old version, and they’ve made it easier for developers to use.

They’re also branching out into healthcare. They’re working on AI that can analyze medical images and draft up initial diagnoses. The goal is to give doctors a helping hand and speed up the process.


Vendict emerges with $9.5M in funding to automate security compliance with generative AI

A company called Vendict just stepped into the spotlight with a cool $9.5 million in funding. Their goal? To make life easier for businesses by tackling the headache of security compliance. You know, that super tedious process where companies gotta prove they’re following the rules before they can work with a customer.

So, how’s Vendict planning to do this? By using some snazzy artificial intelligence (AI) tech to auto-fill those mind-numbing questionnaires. They’re saying it could save a truckload of work hours each month and make the whole selling process quicker.

Now, the brains behind Vendict, Udi Cohen and Michael Keslassy, have come up with an AI system that’s got a handle on all that security jargon. In layman’s terms, they’ve made a machine that understands and can talk the talk of security rules and regulations.

How’s it work? Well, Vendict’s got its own language model that’s been trained in security compliance, and they’ve mixed that with other top-notch models, including one from Microsoft Azure. Vendict’s tech not only helps manage risks within a company, but also does internal audits, keeps track of regulations, and provides a one-stop shop for all compliance documents.
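
To make the questionnaire auto-fill idea concrete, here’s a deliberately tiny sketch: match each incoming question against a library of previously approved answers by word overlap. This is my own toy illustration of the retrieval concept – Vendict’s actual system uses trained language models, not this:

```python
# Toy sketch of auto-filling a security questionnaire by matching each
# question to the closest previously approved answer via word overlap.

def tokenize(text: str) -> set[str]:
    return set(text.lower().replace("?", "").split())

def best_answer(question: str, library: dict[str, str]) -> str:
    """Pick the stored answer whose question shares the most words."""
    closest = max(library, key=lambda q: len(tokenize(q) & tokenize(question)))
    return library[closest]

# Hypothetical answer library built from past questionnaires
library = {
    "Do you encrypt data at rest": "Yes, all data at rest is AES-256 encrypted.",
    "Do you run annual penetration tests": "Yes, an external firm tests us yearly.",
}

print(best_answer("Is customer data encrypted at rest?", library))
# Yes, all data at rest is AES-256 encrypted.
```

A real system would use embeddings and a language model to rephrase answers, but the core workflow – look up what you’ve already said and reuse it – is the time-saver the company is pitching.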


Google’s ChatGPT rival is trained by workers who are under pressure to audit AI answers in as little as 3 minutes, documents show

Google’s chatbot Bard is trained under intense conditions, says Bloomberg. Thousands of contractors, paid as little as $14 an hour, have to check the bot’s answers in a quick-as-a-flash three minutes. They’re under the gun, getting little training for this hot-potato task. The spotlight’s on Google, who’s scrambling to compete with OpenAI’s ChatGPT, a newcomer that’s shaken things up by drawing in 100 million users within two months.

Humans have a big part in making sure chatbot answers are on the nose, but for Bard, this is getting to be a tall order with the ever-growing and tricky workload. Contractors express fear and stress due to this sweatshop environment.

The contractors’ gig also includes rating chatbot responses based on how helpful and fresh the information is. Google claims that their focus is on high-quality information, saying that their system doesn’t only rely on these ratings but also on a mix of expertise from across Google. The folks at Appen and Accenture, the companies where the contractors are from, haven’t chirped up about it yet.


Prolific raises $32M to train and stress-test AI models using its network of 120K people

Alrighty then, here’s the skinny. Prolific, a London-based tech startup, has scored a cool $32 million in funding. What’s their gig? Well, they’ve rounded up a posse of 120,000 folks who help check out and test AI systems. This helps ensure these AIs are up to snuff and don’t go off the rails.

Who put up the dough? Partech and Oxford Science Enterprises co-led the investment. Prolific has been growing since 2014, and they already have a bunch of high-flyers like Google, Stanford, and the University of Oxford using their services.

This latest cash infusion is going to help Prolific expand its operations, but the founder, Phelim Bradley, says they’re not planning to branch out beyond AI. They’re all about giving AIs a good grilling and ensuring they work as they should. And it seems to be working – so far they’ve paid out $100 million to their team of human testers.

In a world that’s all about the rise of the machines, Prolific is keeping it human. Their secret sauce? Using real people to offer up honest, reliable data and test these AIs to the max. They’ve even built some nifty tools to make sure they’re testing the right stuff.


Google hit with lawsuit alleging it stole data from millions of users to train its AI tools

Google’s found itself in hot water. A lawsuit’s claiming that the big G swiped data from millions of folks to train its AI, all without asking permission. Google and its parent, Alphabet, along with its AI offshoot DeepMind, are being sued by the Clarkson Law Firm – and this ain’t the firm’s first rodeo, as they filed a similar suit against OpenAI not long ago.

The claim is Google’s been secretly nabbing all sorts of stuff folks have put online and then using it to train its AI tools, like their chatbot Bard. Google’s lawyer, Halimah DeLaine Prado, says it’s a bunch of baloney. She says Google uses info from public sources, like the open web, to train their AI and that’s all fair game.

Google’s been upfront about using public info in its AI training. An update to their privacy policy made that pretty clear. But this suit’s part of a bigger picture: as AI tools get more popular, companies like Google are catching flak over copyright issues and how they use personal info.

The suit’s looking to put a freeze on Google’s AI work and wants damages for those whose data was allegedly taken without asking. The law firm’s got eight plaintiffs lined up, including a kid.