Peek under ChatGPT’s Hood: Workspace, Profiles, File Uploads Unleashed! – An eagle-eyed Reddit user discovered the latest ChatGPT business features.
New ChatGPT features are on the way: Workspace, File Uploads, Profiles
OpenAI’s up to their old tricks with some snazzy new features. One eagle-eyed Reddit user decided to peek under the hood of ChatGPT’s source code. You might say he waltzed in like he owned the place and, voilà, stumbled upon an interface that could very well be an early version of ChatGPT’s new ‘business’ look.
This business edition might include a ‘workspace’, a profile, and the ability to upload files. Fancy that, your chatty AI pal can now remember your deets and chew on some text documents!
This all lines up nicely with the business model OpenAI promised for ChatGPT back in April 2023, amidst a flurry of new privacy upgrades. The business edition, it seems, is going for a no-peeping-tom approach, promising not to train its models with your personal data. Plus, unlike the Pro version, it’ll remember your chats.
There’s some intrigue about shared workspaces and pre-prompting profiles too. Picture this: you share a workspace with your pals, and presto! You’re a collaborating team. The pre-prompting profile is like your mini-introduction for ChatGPT, saving you from sounding like a broken record introducing yourself every time you chat.
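Nobody outside OpenAI knows how the leaked profile feature actually works under the hood, but conceptually it behaves like a system message prepended to every conversation. Here’s a minimal sketch of that pattern; the profile fields and wording are invented for illustration, not OpenAI’s schema.

```python
# Sketch: a "pre-prompting profile" as a system message that gets
# prepended to every chat. Profile fields here are hypothetical.

def build_messages(profile: dict, user_message: str) -> list[dict]:
    """Turn a saved profile into a system message and attach the user's turn."""
    profile_text = "; ".join(f"{k}: {v}" for k, v in profile.items())
    return [
        {"role": "system", "content": f"User profile: {profile_text}"},
        {"role": "user", "content": user_message},
    ]

# Illustrative profile; in the leaked UI you'd fill this in once.
profile = {"name": "Sam", "role": "data analyst", "tone": "concise"}
messages = build_messages(profile, "Summarize this quarter's churn numbers.")
```

The payoff is that every new conversation starts with the same context, so you never have to re-introduce yourself.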
Meta open sources an AI-powered music generator
Meta just dropped its own AI music whiz, MusicGen, into the open-source world, thumbing its nose at Google’s closed-door operation. MusicGen whips up a quick 12-second tune from any text you throw at it. Want “An ’80s driving pop song with heavy drums and synth pads in the background”? You got it. It’ll even attempt to mimic a melody if you give it a sample track.
MusicGen cut its teeth on a 20,000-hour musical marathon, chowing down on thousands of licensed tracks and instrument-only ditties from Shutterstock and Pond5. They’re keeping their training secrets under lock and key, but the pre-trained models are up for grabs for anyone with a beefy enough computer.
MusicGen won’t knock your socks off, but it’s no slouch either. It cranks out decent tunes that can go toe-to-toe with Google’s MusicLM. It’s not sweeping the Grammys anytime soon, but it’s got some chops. You can compare the two by listening to their versions of “jazzy elevator music” or a more challenging “Lo-fi slow BPM electro chill with organic samples.”
Meta’s playing it cool, claiming all their training tracks are legit and they’re not about to put any restrictions on MusicGen. They’ve got their ducks in a row with a deal from Shutterstock. But the jury is still out, so let’s see where the music stops.
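Since the models are open, anyone with a beefy enough machine can try them via Meta’s `audiocraft` package. The sketch below assumes `audiocraft` is installed and enough memory to load a pretrained checkpoint; the model name and duration are illustrative choices, not requirements.

```python
# Sketch: generating a short clip with Meta's open-source audiocraft
# package. The heavy lifting (model download, generation) only runs
# when executed directly, since it needs real hardware.

def clamp_duration(seconds: float, max_seconds: float = 30.0) -> float:
    """MusicGen clips are short; keep requested durations in a sane range."""
    return max(1.0, min(seconds, max_seconds))

if __name__ == "__main__":
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    model = MusicGen.get_pretrained("facebook/musicgen-small")
    model.set_generation_params(duration=clamp_duration(12))
    wav = model.generate(
        ["An '80s driving pop song with heavy drums and synth pads"]
    )
    audio_write("clip", wav[0].cpu(), model.sample_rate, strategy="loudness")
```

The smallest checkpoint runs on modest GPUs; the larger ones trade memory for noticeably better output.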
Salesforce launches AI Cloud to bring models to the enterprise
Salesforce, the heavyweight in the customer relationship management world, is stepping up its game with a shiny new toy called AI Cloud. This isn’t just your regular old cloud though; it’s packed full of artificial intelligence (AI) goodies. You could say it’s a cloud on a smart diet.
AI Cloud bundles nine AI models aimed at juicing up Salesforce’s mainstay products. Got a sales pitch to send? Sales GPT will write it for you. Need a work order whipped up? Service GPT’s got your back. And that’s just two of ’em.
Borrowing a page from Amazon’s playbook, Salesforce’s AI Cloud plays nice with other AI models, too. So, if you’re already dancing with Amazon Web Services or OpenAI, don’t worry, you can keep the party going.
However, not everything’s rainbows and unicorns in the AI world. The spiffy new tech has a dark side. A couple of bigwig companies have thrown shade at generative AI tools, crying foul over privacy concerns. Salesforce is countering this by introducing Einstein Trust Layer – a bouncer, if you will, that kicks out any sensitive info before it gets to the AI model. It even shows toxic behavior, like discrimination, the door.
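Salesforce hasn’t published the Trust Layer’s internals, but the “bouncer” idea, scrubbing obvious sensitive data out of a prompt before it ever reaches a third-party model, can be illustrated with a simple regex pass. This is a minimal sketch of the concept, not Salesforce’s implementation.

```python
import re

# Illustration only: mask obvious PII patterns before a prompt is sent
# to an external model. A real trust layer would be far more thorough.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Dana at dana@example.com or 555-867-5309."))
# prints: Reach Dana at [EMAIL] or [PHONE].
```

The placeholder labels also leave an audit trail, so you can see what got bounced and why.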
TikTok launches new AI ad script generator
TikTok’s whipping up a fresh tool for its ad wizards. This new toy is called the Script Generator, and it’s up for grabs in the TikTok Creative Center for anyone with a TikTok for Business account – desktop only for now.
In case you’re wondering, this thing’s a walk in the park. Drop in some details about your stuff—like what it’s called, what it does, the words folks might use to find it—and pick how long you want your ad to be. Hit the button, and bam! Instant video script, complete with fancy words to snag folks’ attention, scene suggestions, and all the bells and whistles.
But there’s a wrinkle. This ain’t no golden goose. The ads might come off as generic since the AI’s just copying current trends. TikTok itself gives a heads up, saying don’t put all your eggs in the AI basket ’cause there might be a few hiccups.
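TikTok hasn’t said what the Script Generator does behind the scenes, but the workflow it describes (product details in, structured script out) maps neatly onto a prompt template. The sketch below is entirely hypothetical; the field names and template are invented, not TikTok’s API.

```python
# Hypothetical sketch of the kind of prompt a tool like the Script
# Generator might assemble from the fields an advertiser fills in.

def build_script_prompt(name: str, description: str,
                        keywords: list[str], length_sec: int) -> str:
    """Assemble a generation prompt from the advertiser's inputs."""
    return (
        f"Write a {length_sec}-second video ad script for '{name}'.\n"
        f"Product description: {description}\n"
        f"Search keywords: {', '.join(keywords)}\n"
        "Include a hook, scene-by-scene suggestions, and a call to action."
    )

prompt = build_script_prompt(
    "GlowMug", "a self-heating coffee mug", ["coffee", "gadget"], 30
)
```

The generic-output wrinkle follows directly from this design: every advertiser in a niche feeds in similar keywords, so the model sees near-identical prompts.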
Snorkel AI looks beyond data labeling for generative AI
Snorkel AI is stepping up its game, switching from data labeling to helping organizations prep data for generative AI. Data labeling, essentially putting tags on your info, was all the rage, but AI is evolving. Enter two shiny new offerings: GenFlow, a service for building generative AI apps, and Snorkel Foundry, for creating custom language models.
They’ve been cooking up a data platform to help with the “data-y” side of AI. Last November, they added a few bells and whistles to their Snorkel Flow tech, speeding up the time-consuming process of data labeling. Their trick? Using large language models to get a head start.
Now, they’re diving deeper with GenFlow, which helps build generative AI apps, and Snorkel Foundry, which helps create those custom language models. CEO Alex Ratner points out that you can’t just feed AI any old data and expect something good to come out. It’s not a magic beanstalk.
A problem with generalized generative AI tools is “hallucination”, where the AI spits out inaccurate responses. Ratner says it’s an error from not training the model for a specific task or not giving it all the right info. Snorkel is trying to solve this with Snorkel Foundry, which curates data.
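Snorkel’s open-source library popularized the trick behind all of this: instead of humans tagging data by hand, you write cheap heuristic “labeling functions” that vote on each example. Here’s a dependency-free sketch of that idea with made-up spam heuristics; it illustrates the concept, not GenFlow or Snorkel Foundry themselves.

```python
# Sketch of Snorkel-style programmatic labeling: several heuristic
# "labeling functions" vote on each example, and the majority wins.
# Heuristics and labels here are invented for illustration.

ABSTAIN, HAM, SPAM = -1, 0, 1

def lf_keyword(text: str) -> int:
    return SPAM if "free money" in text.lower() else ABSTAIN

def lf_shouting(text: str) -> int:
    return SPAM if text.isupper() else ABSTAIN

def lf_long_message(text: str) -> int:
    return HAM if len(text.split()) > 8 else ABSTAIN

def label(text: str) -> int:
    """Majority vote over the labeling functions that don't abstain."""
    votes = [lf(text) for lf in (lf_keyword, lf_shouting, lf_long_message)]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

print(label("FREE MONEY CLICK NOW"))  # both spam heuristics fire
```

Scaling this up, with statistical models to weigh conflicting votes, is what lets you curate large task-specific training sets quickly, which is exactly the hallucination fix Ratner describes.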
OpenAI, DeepMind and Anthropic to give UK early access to foundational models for AI safety research
The UK government is all-in on AI safety, seemingly after a nudge from some big tech voices whispering words like “existential threat” and “extinction-level risk” – sure to keep any prime minister up at night. Prime Minister Rishi Sunak announced at London Tech Week that the UK will be getting early peeks at AI models from the big names: OpenAI, DeepMind, and Anthropic, to support safety research. This all comes with a £100 million check to the newly formed AI safety taskforce. Sounds like an ambitious effort to put the UK at the head of the AI safety table.
Sunak is now dreaming of the UK housing a global AI safety watchdog. He’s even planning a global AI safety summit, a sort of COP26 but for robots. This sea change has come pretty quick, considering in March the government was more about flexing and innovating with AI than leashing it.
But not everyone is thrilled about the tech giants’ involvement. Some worry they might influence the conversation and, by extension, the eventual rules that apply to their businesses. Another concern is that the hype around “superintelligent” AIs might drown out discussions about real-world harm that’s happening now, like biased algorithms and privacy issues.
Salesforce pledges to invest $500M in generative AI startups
Salesforce is doubling down on its bet on AI, y’all. They’ve taken their Generative AI Fund, a little nest egg set aside for startups whipping up smart AI solutions, and blown it up from $250 million to a hefty $500 million. This ain’t chump change. In plain talk, this means more moolah for budding tech geniuses.
What sets Salesforce’s fund apart from other tech investors? Well, they’re putting a premium on ethical AI. Take Tribble, an automation platform they’ve backed, which has hooked up with Private AI to keep users’ private data, well, private.
The tech these companies are cooking up is as diverse as a potluck. You.com is an AI-powered search engine with art and text tools, while Humane is creating a wearable, screen-less AI assistant.
At the same time, Salesforce is unveiling its AI for Impact Accelerator, a program aimed at funneling $2 million to education, workforce, and climate organizations to promote fair and ethical AI use. It’s a bit like Robin Hood, but instead of robbing the rich, they’re just opening their own deep pockets.