DeepMind’s Bold Prediction on AI’s Million Dollar Entrepreneurial Journey in Two Years
The Modern Turing Test 🤯 Can AI make $1 million? Mustafa Suleyman of DeepMind says “within 2 years”
The co-founder of DeepMind proposes a “Modern Turing Test” that would task an AI with building a business that makes $1,000,000 on auto-pilot.
He believes this can be done within the next 2 years.
Here’s a video that goes over what that would entail and why it may not be as crazy as it sounds.
Apple is testing a ChatGPT-like AI chatbot
Apple is working on its own artificial intelligence (AI) tools, taking aim at big shots like OpenAI and Google. They’ve cooked up a chatty bot, nicknamed “Apple GPT”, but haven’t figured out yet how to get it in the hands of everyday folks. They’re hoping to make a big splash with an AI reveal next year.
Apple’s got this new setup, codename “Ajax”, for creating huge language models, the kind of systems that power OpenAI’s ChatGPT and Google’s Bard. They’ve built Ajax using Google tech and it even runs on Google Cloud.
Apple put the brakes on rolling out this chatbot to their own employees because of worries about AI security. But now, more Apple workers are getting to use it, though only with special approval. It’s being used in-house to help with new product designs, but any chatbot output can’t be used for customer features.
Right now, Apple’s chatbot is pretty similar to what’s already out there, like Bard, ChatGPT, and Bing AI. It doesn’t bring anything new to the party.
Apple has been on the hunt for AI experts, particularly folks who understand this generative AI stuff and big language models.
Google Tests AI Tool That Is Able To Write News Articles
Google’s got a new tool cooking in their kitchen called Genesis, an artificial intelligence (AI) tech designed to help write news articles. They’ve been showing it off to heavy hitters in the news game like The New York Times, The Washington Post, and News Corp, the folks who own The Wall Street Journal.
Genesis, a clever helper, can take current events info and turn it into news articles. Google sees this as a wingman for journalists, taking care of the grunt work and freeing up time for bigger tasks. They’re pitching it as a way to dodge the potholes of AI in publishing.
However, some big wigs found Google’s pitch a bit unnerving. They feel it undervalues the sweat and skill needed to craft accurate, well-written news. But Google says these AI tools aren’t meant to replace journalists, but rather lend a hand with stuff like headlines and different writing styles.
Meanwhile, news organizations worldwide are wrestling with whether or not to let AI tools into their newsrooms. AI could change the game, letting users whip up articles on a grander scale. But without careful editing and fact-checking, there’s a risk of spreading fake news and messing up the reputation of traditional news stories.
Meta, Microsoft Team Up to Offer New AI Software for Businesses
Microsoft and Meta are joining forces to offer a new artificial intelligence (AI) language model called Llama 2. This software will be a free tool for folks building software on Microsoft’s Azure cloud-computing platform. Before, Meta had only given Llama to the brainiacs in academia.
Microsoft also plans to peddle a new AI-powered assistant for their workplace software, Microsoft 365, for $30 a month per person. That’s double the cost of their cheapest productivity software. It’s a clear sign that they’re betting big on AI. This news sent Microsoft shares flying 4% on Tuesday to a record $359.49, while Meta’s stock price took a slight dip.
Unlike the previous version, Llama 2 is being released as “open source” software. This means it’s free for everyone to use, change, and share. This move might stoke some rivalry with private, commercial models like GPT-4 by OpenAI, which also powers popular chatbot ChatGPT.
Microsoft’s new pricing for Copilot, its own AI assistant using OpenAI’s tech, is part of a plan to reframe its software offerings around AI. Copilot’s talents include summarizing emails and turning a Word document into a PowerPoint presentation. The exact release date for Copilot remains a mystery, but some big businesses are already testing the tool.
Google is testing AI-generated Meet video backgrounds
Google’s been cooking up a new feature that’ll give your video calls a fancy makeover. Instead of staring at your cluttered office or a boring blurred out background, now you can pretend you’re chatting from a swanky living room or any other place you can dream up.
This isn’t just a bunch of pre-made pictures. Google’s using artificial intelligence to whip up these backgrounds on the fly. Want to set the mood? Just type what you’re after, say, “luxurious living room interior”, and boom! Your video call just got a major upgrade.
The feature’s not ready for everyone yet, Google’s still putting it through its paces in its Workspace Labs. If you’re one of the lucky few with access, changing the background is a cinch. Just click on an icon before you join the meeting, type in what you fancy, and pick from the options that pop up.
If you’re already in a meeting and feel like mixing things up, just head over to the “Apply vision effects” option in the menu.
With all these new AI goodies, Google’s squaring up against Microsoft 365’s AI Copilot suite. And they’re not just stopping at jazzy video call backgrounds. Google’s also testing things like AI-generated summaries and a feature that scans your docs to train itself as your personal assistant.
Microsoft launches vector search in preview, voice cloning in general availability
Microsoft’s bringing out some fresh AI goodies at its annual Inspire conference. The headliner is Vector Search, a new feature for Azure. Instead of searching for exact words, it finds what you need based on the ‘essence’ or meaning of words and images. It’s like when you’re playing charades and can’t think of the exact word but you know what it feels like. Imagine that, but for your database.
This tech turns words or pictures into a series of numbers (vectors) that represent their meaning, helping computers understand and find relevant stuff quickly. Several big names like Amazon, Google, and a few others already use this kind of search.
Azure’s Vector Search comes with its own bells and whistles, including the ability to provide personalized responses, recommend products, and find patterns in data. It can also be used to make chat-based apps that can search, convert images into vectors, and dig up useful info from massive piles of data.
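The core idea — turning text into vectors and ranking by “closeness of meaning” — can be sketched in a few lines. This is a minimal illustration, not Azure’s actual implementation: real vector search uses learned embedding models with hundreds of dimensions, while the tiny hand-made vectors below are purely hypothetical stand-ins.

```python
# A minimal sketch of vector (semantic) search, using plain Python.
# The toy "embeddings" below are made up for illustration; real systems
# get these vectors from an embedding model.
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means they point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: each document maps to a vector capturing its meaning.
documents = {
    "red running shoes": [0.9, 0.1, 0.0],
    "blue trail sneakers": [0.8, 0.2, 0.1],
    "stainless steel kettle": [0.0, 0.1, 0.9],
}

def search(query_vector, docs, top_k=2):
    """Rank documents by semantic closeness to the query vector."""
    ranked = sorted(
        docs.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

# A query like "footwear for jogging" would embed near the shoe vectors,
# so both shoe documents outrank the kettle even with no shared words.
print(search([0.85, 0.15, 0.05], documents))
```

That last point is the charades analogy in action: the query never says “shoes”, but its vector lands close to the shoe documents, so they come back first.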
Partnership with American Journalism Project to support local news
The American Journalism Project (AJP), a powerhouse in supporting local news, is joining hands with OpenAI, the brains behind ChatGPT, to see how artificial intelligence (AI) can give a boost to local news outlets. OpenAI is ponying up $5 million to aid AJP’s mission, plus up to $5 million in OpenAI API credits to give local news groups the chance to play around with AI tech.
AJP will use the OpenAI funds to experiment with AI in a few ways:
- They’re setting up a Technology and AI Studio to figure out how to use AI in local news. The studio will coach AJP organizations, help them use AI tools, and get everyone talking about how to use AI to support quality journalism and stop the spread of false information. They’ll also be sharing what they learn as they go.
- They’ll give grants to about ten organizations to try out AI. The lessons learned from these groups will be shared with the whole local news community to show how AI can be best used.
- They’ll use the API credits from OpenAI to create and use tools with the technology.
AJP is all about fixing the local news crisis. They’re supporting a new generation of nonprofit local news groups across the country, and have raised $139 million to tackle the issue. OpenAI, which was founded in 2015, is dedicated to making sure AI benefits everyone.
Thousands of authors sign letter urging AI makers to stop stealing books
Around 8,500 authors are raising a ruckus, accusing big tech firms of using their work to train AI writing systems without permission or pay. Language models like ChatGPT, Bard, and LLaMA are kinda like parrots, mimicking and spitting out authors’ styles and ideas. How these AI companies got hold of the books, whether from bookstores, libraries, or from less legit places, is unclear. But they definitely didn’t do the right thing and get licenses from publishers.
According to the authors, these AI writing tools could flood the market with low-quality, machine-written books and harm their livelihoods. This ain’t just hypothetical; there’s already been a surge of subpar AI-generated books on bestseller lists and publisher desks. The authors reckon it’s bad for them and even worse for up-and-coming writers, particularly those from underrepresented communities.
Their demands? Simple. Get permission, pay up for past and future use of their work in AI, and compensate for any use of their work in AI outputs. The authors aren’t threatening to sue yet, but they’re fed up with their work being used as free fuel for AI. As of now, there’s not much motivation for these tech giants to fess up and pay up. Most folks don’t realize that these AI language models are built on what’s essentially stolen goods. The authors, though, are ready to take a stand.
McKinsey partners with startup Cohere to help clients adopt generative AI
Big shot consulting firm McKinsey has teamed up with AI startup Cohere, aiming to bring AI tools to their corporate customers. This is McKinsey’s first dance with a major AI language model provider. They’re joining a crowd of global consulting firms looking to ride the AI wave started by the well-known, Microsoft-supported ChatGPT.
According to bigwig Ben Ellencweig from McKinsey, Cohere’s model takes into account important stuff like cost, intellectual property protection, user privacy, and how the model is taught, which makes it a top pick. McKinsey’s plan? Work with Cohere to make personalized solutions that boost customer interaction and automate tasks. They’re also looking to use Cohere to speed up their own operations and boost their knowledge system.
Cohere, a company built by some of the head honchos from Google’s AI research team, offers a neutral option for businesses to use AI models that aren’t tied to big cloud providers like Microsoft. They’re directly competing with OpenAI and are all about creating AI tools for businesses.
Cohere had a good run last month, raising a hefty $270 million from investors, and they’re currently valued at $2.2 billion. They’ve also buddied up with Oracle, which will be integrating Cohere’s AI tech into its products.
Futureverse raises $54M to marry AI and the metaverse
Futureverse, a tech whiz dabbling in artificial intelligence (AI) and the metaverse, has raised $54 million in its latest round of funding, with 10T Holdings leading the dance and Ripple joining the fun. Futureverse’s special sauce is a bundle of AI tools that can jazz up music, characters, and animations in the metaverse.
Futureverse has a grand vision: it wants to blend tech infrastructure and AI content to craft the dream metaverse, making this futuristic concept a practical, hands-on destination everyone can dive into.
The company’s big plan includes blockchain technology, and they’re feeling extra bullish after backer Ripple’s favorable court ruling last week. They’ve silently scooped up 11 companies so far to build their strategy.
Recently, Futureverse unveiled an AI-powered game in partnership with FIFA, and another one with Muhammad Ali Enterprises. Aiming to be a top dog in AI gaming and metaverse content, Futureverse plans to use its recent funding to develop more ground-breaking tech, including its Futureverse Platform.
Teladoc expands Microsoft tie-up to document patient visits with AI
Teladoc Health is beefing up its partnership with Microsoft to use some of that smart AI tech for automating patient visit records on its virtual health platform. This move gave Teladoc’s stock a little 6% pre-market bump.
Teladoc plans to use Microsoft’s voice-powered AI system, Nuance Dragon Ambient eXperience, to automatically write down patient visits. This leaves doctors to just review and sign off on ’em.
Teladoc’s top doc, Vidya Raman-Tangella, said that this sorta paperwork and staff shortages are big reasons why many clinicians are quitting the profession.
Meta internal memo: ‘Regulatory debate around AI will intensify in coming months’
Meta, the company behind Facebook, predicts more government heat over how it handles artificial intelligence (AI) safety. This comes after a couple of US Senators quizzed CEO Mark Zuckerberg about a supposed leak of info related to one of their AI models. This juicy info comes from a company memo, written by Meta’s product boss, Chris Cox, which Moneycontrol managed to get their hands on.
Earlier this year, Meta decided to share the inner workings of its AI model, LLaMA, with some trusted AI scientists. But things went sideways when it turned out anyone could get it off the internet from places like BitTorrent. Despite this hiccup, Meta’s still keen on sharing LLaMA’s code with selected researchers, saying it’s a fine line between keeping stuff under wraps and playing open cards.
Senators Richard Blumenthal and Josh Hawley, who hang out in the Senate’s Privacy and Tech Committees, have asked the tech big shot what they’re doing to stop this from happening again or to lessen the impact of the AI model’s wide release.
UK’s approach to AI safety lacks credibility
The U.K.’s got big dreams to be an AI superstar, splashing the cash and hosting a fancy summit, but a report from the Ada Lovelace Institute suggests it’s all smoke and mirrors. The government’s turning a blind eye to the need for new rules to regulate AI and, at the same time, planning to cut back on data protection.
The report calls for a major rethink, dropping 18 suggestions for how the U.K. can get its act together. Among them, the institute says the U.K. needs a stronger, “expansive” definition of AI safety – focusing on actual harms caused by AI today, not future sci-fi problems.
The Institute sees the U.K.’s plan to let existing regulators handle AI with a set of loose principles, and no new resources, as a piecemeal approach. This is in contrast to the EU, which is busy crafting a solid, risk-based framework for AI.
Among the report’s recommendations are that the government should get a grip on data protection reforms, clarify the law around AI and liability, give regulators more resources to deal with AI harms, and set up an AI ombudsperson to help folks impacted by AI. The Institute also thinks the government needs to be more on the ball with new AI tech, maybe by requiring AI developers to give them a heads up about major new projects.