Introducing xAI, Elon Musk’s latest venture into artificial intelligence, set to shake up the tech world with its ambitious aim of unravelling the true nature of the universe.
Elon Musk announces a new AI company
Elon Musk, the head honcho of Tesla, is kick-starting a new venture in the AI arena called xAI. The new firm has roped in researchers from big shots like OpenAI and Google. Musk, who’s been vocal about pumping the brakes on AI and calling for tighter rules, said this startup is all about getting to grips with “reality”. However, it’s still hush-hush on the funding details, the company’s exact goals, and the kind of AI it’s going to zoom in on.
xAI’s mission, as per its website, is to decode the “true nature of the universe.” More intel on the company’s game plan may spill over a Twitter Spaces chat coming Friday.
Musk was a key early backer of OpenAI, the creator of the popular chatbot ChatGPT. Since then, however, he’s been butting heads with the lab, accusing it of a liberal bias and a too-cozy relationship with Microsoft.
The entrepreneur, who earlier this year pushed for a halt on “Giant AI Experiments”, thinks an AI watchdog should be established to ensure public safety. He’s also taken issue with AI companies over their data scraping practices to train chatbots and believes Twitter, which he owns, should get a fair shake for its data.
After Musk took over Twitter, some big-name users like Shonda Rhimes, Gigi Hadid, and Stephen Fry checked out of the platform in protest over the sweeping changes he made.
Google’s AI-powered notes app is now called NotebookLM, and it’s launching today
Google’s AI-powered notes app, previously dubbed Project Tailwind, has gotten a makeover and a new name: NotebookLM (the LM is short for Language Model). Google’s going for a small, US-based audience launch to start. NotebookLM is your own pocket-sized AI assistant, trained on your personal data and notes, to help you make sense of them.
The app begins its life in Google Docs but promises to handle more formats soon. You can select a bunch of docs, then use NotebookLM to fire questions or even generate new content from them. Google gives examples of the app’s usage, like automatically summarizing a long document or transforming a video outline into a script. It’s particularly handy for students, as they can ask for summaries of their weekly class notes or a rundown of what they’ve learned about a certain topic.
Now, Google isn’t alone in this space. Other tech players like Dropbox, Mem, and Notion are also creating hyper-focused AI tools. Google’s trying to make its AI stand out by training it on user-specific data to improve responses and avoid confident but wrong answers. The app also offers citation features for easy fact-checking, but beware – if you feed it wrong info, it’ll regurgitate wrong answers.
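Google hasn’t published how NotebookLM grounds its answers in your documents, but the general pattern behind this kind of tool can be sketched in a few lines. Everything here is hypothetical: the `build_grounded_prompt` function, the prompt wording, and the sample notes are illustration only, not Google’s API.

```python
# Toy sketch of how a NotebookLM-style assistant could ground answers in
# the user's own documents: stuff the selected docs into the prompt as
# numbered sources, so the model can answer with citations like [1].
# The prompt format and helper are made up for illustration.
def build_grounded_prompt(question: str, docs: dict) -> str:
    sources = []
    for i, (title, text) in enumerate(docs.items(), start=1):
        sources.append(f"[{i}] {title}: {text}")
    return (
        "Answer using ONLY the sources below, citing them as [n].\n"
        "If the sources don't contain the answer, say so.\n\n"
        + "\n".join(sources)
        + f"\n\nQuestion: {question}"
    )

docs = {
    "Week 3 notes": "Photosynthesis converts light energy into glucose.",
    "Week 4 notes": "Cellular respiration breaks glucose down into ATP.",
}
prompt = build_grounded_prompt("How do plants store energy?", docs)
print(prompt)
```

This also makes the garbage-in, garbage-out caveat concrete: since the model only sees the sources you hand it, wrong notes produce confidently wrong, but dutifully cited, answers.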
As for privacy, Google states that NotebookLM only uses the documents you upload, with no data sharing or use for training new AI models. It’s a delicate dance between surrendering personal info to AI for convenience and guarding private data.
8 Microsoft Advertising Updates Including Predictive Targeting And Generative AI For RSAs
Microsoft Advertising is bringing eight new tricks to the table this July.
First, they’re rolling out Predictive Targeting, a clever tool that uses artificial intelligence (AI) to find and connect with new audiences. It’ll help boost ad conversions, but there’s a risk of wasted money or harm to the brand if the wrong folks see your ads.
Second, they’re using generative AI to create and edit responsive search ads (RSAs). It provides AI-made headlines and descriptions based on the advertiser’s final website address. This tool will even dish out suggestions in 35 languages and allows you to auto-generate ad assets.
Third, they’ve added IF functions for RSAs to help target ads and customize them based on device and audience. This means you can tailor messages to specific devices or audience groups without running separate campaigns.
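The substitution logic behind IF functions can be sketched in a few lines of Python. This is a toy renderer, not Microsoft’s implementation, and the marker syntax here only mimics the general idea; the exact syntax in real campaigns may differ.

```python
# Minimal sketch of IF-function-style ad customization: one ad template,
# different copy per device or audience, no separate campaigns needed.
# The "{IF(condition,text):default}" syntax is illustrative only.
import re

def render_ad(template: str, device: str, audiences: set) -> str:
    """Replace {IF(key=value,text):default} markers based on context."""
    pattern = re.compile(r"\{IF\((\w+)=(\w+),\s*([^)]*)\):([^}]*)\}")

    def substitute(match):
        key, value, text, default = match.groups()
        if key == "device" and device == value:
            return text
        if key == "audience" and value in audiences:
            return text
        return default

    return pattern.sub(substitute, template)

template = ("Shop now {IF(device=mobile,from your phone):online} - "
            "{IF(audience=returning,welcome back!):free shipping!}")

print(render_ad(template, device="mobile", audiences={"returning"}))
print(render_ad(template, device="desktop", audiences=set()))
```

One template renders as “Shop now from your phone - welcome back!” for a returning mobile user and “Shop now online - free shipping!” for everyone else.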
Fourth, they’re launching automated multimedia ads within Dynamic Search Ads groups. They’ll use AI to create attractive ads optimized for performance using your website’s content.
Fifth, they’re extending Property Promotion Ads to include vacation rentals. These eye-catching ads aim to get potential travelers excited about a property. They also offer advertisers more control over images and callouts.
Sixth, Microsoft Advertising has enhanced its Universal Event Tracking (UET). Now you can troubleshoot and monitor UET events in real-time, and the UET overview tab offers a longer lookback period.
Seventh, data-driven attribution (DDA) reporting is now generally available. It uses machine learning to measure the real contribution of each ad interaction on conversion, a departure from the traditional Last Click Attribution model.
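The gap between last-click and data-driven credit is easy to see with a toy example. The `linear_share` function below is a crude stand-in that just spreads credit evenly across a path; Microsoft’s actual DDA uses a proprietary machine-learning model, and the channel names and paths here are made up.

```python
# Toy comparison of last-click vs data-driven attribution.
# Each conversion path is the list of ad touchpoints one converting
# user saw. Last-click hands all credit to the final touch; the
# simplified "data-driven" stand-in splits each conversion across
# every touchpoint on the path.
from collections import Counter

paths = [
    ["search", "social", "search"],
    ["display", "search"],
    ["social", "search"],
    ["display", "social", "search"],
]

def last_click(paths):
    credit = Counter()
    for path in paths:
        credit[path[-1]] += 1.0   # final touch takes everything
    return credit

def linear_share(paths):
    credit = Counter()
    for path in paths:
        for channel in path:      # spread one conversion evenly
            credit[channel] += 1.0 / len(path)
    return credit

print(last_click(paths))    # search gets all 4 conversions
print(linear_share(paths))  # display and social get partial credit
```

Under last-click, search looks like the only channel that works; the shared model reveals that display and social were quietly feeding those conversions, which is the kind of contribution DDA is meant to surface.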
Lastly, they’re dropping several old features in Keyword Planner due to their outdated nature and incompatibility with the system, effective August 21, 2023.
So, it seems like Microsoft Advertising is making moves to provide smarter and more efficient ad services. They’re giving advertisers a chance to reach more customers more effectively and at a lower cost. Advertisers should stay tuned for more improvements and not forget to leave feedback.
Sapphire Ventures plans to invest over $1B in enterprise AI startups
Big-name investment firm Sapphire Ventures is looking to plow more than $1 billion into startups building AI for businesses. It’s pulling the money from its existing funds, which hold $10 billion in total, with about $3 billion still ready for action.
They’re mainly eyeing companies that build business software using AI to predict outcomes better. They’re also backing AI projects that beef up earnings in specific sectors like manufacturing and healthcare.
Sapphire’s CEO, Nino Marakovic, believes AI is a game-changer and is stoked to support this new generation of business trailblazers. The firm is also working on an “AI Community” for executives across its portfolio companies.
But Sapphire isn’t the only one playing this game. Others like Salesforce Ventures, Workday, OpenAI, Dropbox, and AWS are also putting serious money into AI startups, and consulting giants Accenture and PwC are stepping into the ring with billions in AI investments.
Nvidia invests $50 million in biotech company Recursion for A.I. drug discovery
Nvidia, the chip-making big gun, is pumping $50 million into Recursion Pharmaceuticals, a biotech company using artificial intelligence (AI) for new drug discovery. After the news hit, Recursion’s stock soared about 80%, while Nvidia’s saw a small but neat uptick of over 2%.
Recursion uses AI to hunt down and design new treatments, and its services are already in use by major players like Roche and Bayer. Based in Salt Lake City, Utah, Recursion will draw on its mountain of over 23,000 terabytes of biological and chemical data to train its AI models on Nvidia’s cloud platform.
The end game? Nvidia could license these AI models on BioNeMo, its cloud service aimed at AI in drug discovery. Recursion is hoping to use this to push its own drug pipeline, and those of its partners. It’s already got five drugs being tested on humans.
Voice cloning platform Resemble AI lands $8M
Resemble AI, a company that makes tech that copies voices, just bagged $8 million in funding. This cash will help the company make more products and double its team to more than 40 folks by year-end.
This company’s technology is being used by big-name media companies to make content that was never possible before. Founded in 2019, it began with a focus on video game voices but has grown to include a bunch of other cool stuff. For instance, their tech can “transfer” voices into other languages, make personalized messages from voice actors, and even create chatbots.
Resemble AI tries to keep things on the up and up. They make users give clear permission to clone their voices and have rules to prevent the tech from being used for no-good purposes. They also have tech to validate whether audio is real or fake and to add hidden, identifying tones to the voices they create. With over a million users and a huge amount of audio created, the company sees these tools as key to their success.
Bill Gates explains why we shouldn’t be afraid of A.I.
Bill Gates, Microsoft co-founder, sees potential in artificial intelligence (AI) – like that used in ChatGPT. Despite acknowledging potential issues like fake videos (deepfakes), biased computer programs, and academic cheating, Gates is optimistic we can fix these issues. He highlights that no one has all the answers about AI risks, but insists that AI’s future isn’t as scary or rosy as people make it out to be.
His balanced perspective might sway the conversation about AI, moving it away from end-of-the-world fears and towards reasonable regulations addressing present risks. With governments globally scratching their heads about how to control AI and potential pitfalls, Gates’s voice could be a game-changer. After all, he’s not just anyone – he’s heavily linked to Microsoft, a company with hefty investment in AI.
In his blog post, Gates notes how society has adapted to major changes in the past, like the rise of handheld calculators and computers in classrooms, and argues we can do the same for AI. His vision for AI regulation? Something like “speed limits and seat belts.” He uses the automobile as an example, stating that after the first car crash, we didn’t outlaw cars, but instead introduced safety standards and rules.
While recognizing the challenges AI might pose, such as how it might affect jobs and the tendency for AI systems like ChatGPT to make up facts, he remains hopeful. Citing the problem of deepfakes, he suspects people will improve at spotting them, and gives a shout-out to deepfake detectors being developed by Intel and DARPA, the US defense research agency. He suggests a clear regulatory framework to dictate what kinds of deepfakes can be legally created.
Google AI health chatbot passes US medical exam: Study
Google’s AI-powered medical chatbot, Med-PaLM, scored a passing grade on the tough US Medical Licensing Exam (USMLE), a crucial test for future doctors. While promising, the bot’s responses are still playing second fiddle to human physicians, says a fresh study.
Med-PaLM, however, isn’t yet public. Google says it’s the first AI of its kind to pass the USMLE, where roughly 60% counts as a passing score; Med-PaLM scored 67.6%.
Yet, the bot’s got room for improvement. It still dishes out ‘hallucinations’, or false info, leading Google to create a new check system.
In the future, Med-PaLM might be a support system for doctors, offering fresh solutions. As for now, it’s being tested at the renowned US Mayo Clinic for more routine, lower-stakes tasks, steering clear of direct patient contact.
AI Could Quickly Screen Thousands of Antibiotics to Tackle Superbugs
Scientists are turning to artificial intelligence (AI) to help develop new antibiotics for fighting superbugs, those bacteria that are resistant to most drugs. A recent study from MIT and McMaster University used an AI program to pinpoint an antibiotic that could kill Acinetobacter baumannii, a notoriously tough bacterium that often leads to serious illnesses like meningitis and pneumonia.
The great thing about AI is that it can speed up the process of creating new antibiotics and cut down the cost, by figuring out which compounds might work without having to run tons of experiments. This is particularly important because we’re in a fix right now – antibiotic resistance is growing, but we’re running out of new antibiotics.
Using AI, the scientists were able to test thousands of potential drug compounds against the stubborn bacterium, and they found nine potential antibiotics, including one called abaucin. The cool thing about abaucin is that it’s “narrow spectrum,” meaning it only kills specific bacteria. That’s good because it doesn’t mess with other bacteria and doesn’t throw off the balance of bacteria in our bodies.
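The core screening idea, scoring untested compounds against examples with known activity, can be sketched with made-up data. Real pipelines like the one in the study use learned molecular representations and trained neural networks; the fingerprints, compound names, and similarity score below are toy stand-ins.

```python
# Toy sketch of AI-assisted antibiotic screening: compare each
# untested compound to compounds with known activity, then rank the
# library so lab time goes to the most promising candidates.
# All fingerprints and names here are invented for illustration.

# each entry: (binary structural fingerprint, active against bacterium?)
training = [
    ([1, 0, 1, 1, 0], 1),
    ([1, 1, 1, 0, 0], 1),
    ([0, 0, 0, 1, 1], 0),
    ([0, 1, 0, 0, 1], 0),
]

def tanimoto(a, b):
    """Similarity between two binary fingerprints (0.0 to 1.0)."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

def predicted_activity(fp):
    """Score: closeness to known actives minus closeness to inactives."""
    active = max(tanimoto(fp, f) for f, label in training if label == 1)
    inactive = max(tanimoto(fp, f) for f, label in training if label == 0)
    return active - inactive

library = {
    "compound_A": [1, 0, 1, 0, 0],
    "compound_B": [0, 1, 0, 1, 1],
    "compound_C": [1, 1, 1, 1, 0],
}
ranked = sorted(library, key=lambda n: predicted_activity(library[n]),
                reverse=True)
print(ranked)  # most promising candidates first
```

The payoff is exactly the one the article describes: instead of running tons of wet-lab experiments, you compute scores for thousands of compounds and only synthesize the top of the list.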
Cyber Attackers Can Disable AI Systems By ‘Data Poisoning’: Google AI Expert
Google Brain’s Nicholas Carlini is sounding the alarm: cyber attackers can mess up AI systems using a trick called “data poisoning.” By tinkering with a small piece of the AI’s learning data, these baddies can make the AI go haywire.
This isn’t science fiction anymore; it’s real, Carlini warned at a recent AI conference. In a nutshell, “data poisoning” is when someone slips false or misleading samples into an AI’s training data. Like a student studying from a tampered textbook, the model learns the wrong stuff, leading to all sorts of unforeseen, and often harmful, results.
Carlini says corrupting as little as 0.1% of the training data can throw the whole AI off track. Poisoning was long written off as a thought experiment for researchers, but it’s time we wake up and smell the coffee: it’s a genuine threat with serious real-world effects.
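The targeted flavor of the attack can be sketched with a toy classifier. The 1-nearest-neighbor model, the numbers, and the class labels below are invented for illustration; real poisoning attacks on large models are far subtler, but the shape is the same.

```python
# Minimal illustration of data poisoning against a 1-nearest-neighbor
# classifier: a single planted sample flips the prediction for one
# chosen input while everything else keeps working, so the attack is
# hard to notice from overall accuracy alone.
def nearest_label(train, x):
    """1-NN: return the label of the closest training point."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# clean data: values below 5.0 are class "A", above are class "B"
clean = [(v, "A") for v in (1.0, 2.0, 3.0, 4.0)] + \
        [(v, "B") for v in (6.0, 7.0, 8.0, 9.0)]

target = 3.5                      # the input the attacker wants to flip
print(nearest_label(clean, target))       # "A" on clean data

poisoned = clean + [(3.4, "B")]   # one bad sample; a tiny fraction of a
                                  # large dataset, echoing the 0.1% figure
print(nearest_label(poisoned, target))    # now "B": targeted failure
print(nearest_label(poisoned, 1.5))       # other inputs still correct
```

Notice that the poisoned model still answers correctly almost everywhere, which is what makes this attack so hard to catch by ordinary testing.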