Amazon steps up its game with a staggering $4 billion investment in AI start-up Anthropic, aiming to secure a pivotal role in the rapidly evolving tech landscape


Amazon to invest up to $4bn in AI start-up Anthropic

Amazon’s throwing up to $4 billion into AI start-up Anthropic, marking another step in its techie tug-of-war with big names like Microsoft and Google. They’re dropping an initial $1.25 billion for a minority stake in the company, with the option to up the ante later.

This deal has Anthropic swinging Amazon’s way, a shift from its $300 million deal with Google seven months ago. For Amazon, it’s a chance to ride the wave of excitement around AI that can cook up human-like text and images, and to position its own chips as contenders against Nvidia’s.

Anthropic’s got Claude, a chatbot giving ChatGPT a run for its money, already set up on Amazon’s services, allowing folks to create AI apps in the cloud.

Despite this, Anthropic’s co-founder, Dario Amodei, says it’s all good with Google, no change there. And for Amazon, this deal is a chance to flex its muscles against Nvidia, showing off its chips and hopefully shaking off any talk of it lagging in the AI game.


Spotify’s AI Voice Translation

Spotify’s cookin’ up something cool: a new feature that translates podcasts into different languages using AI, all while keeping the original podcaster’s voice and style. This means you could be listening to your favorite shows in your own language, but still hear the real voice and style of the person who made it! 

They’re testing this out with a few big names like Dax Shepard and Monica Padman, rolling out episodes in Spanish, French, and German to start. It’s all about making it easier for folks around the world to get into new podcasts and hear stories in a way that feels real and personal. This feature is rolling out for both Premium and Free users, and they are starting with a few episodes but planning to add more as time goes on. 


ChatGPT update enables its AI to “see, hear, and speak,” according to OpenAI

OpenAI has jazzed up ChatGPT so it can now react to pictures and have real-deal spoken convos on the mobile app. They’re rolling out these cool tweaks for Plus and Enterprise users in a couple of weeks. So, with the new features, you can show it pics and chat about them, handy for everyday hitches like figuring out dinner or grill troubles.

OpenAI ain’t spilled the beans on all the techy how-to details, but it’s likely they’ve found a slick way to make sense of text and images together. And when it comes to chattin’, there are new synthetic voices for more natural convos, built with the help of professional voice actors.


Microsoft signs deal to serve sponsored links in Snapchat’s My AI

Snapchat’s My AI, a chatbot available to Snapchat’s 750 million monthly users, is teaming up with Microsoft. They are going to start showing sponsored links to users, which is a big deal for advertisers aiming to reach a ton of folks, especially younger ones. The sponsored links will show up prominently on mobile devices.

Microsoft’s advertising tools will help track how well the ads are doing, and so far, they’re seeing good results with more people seeing ads on mobile. Microsoft’s planning to expand this feature to more partners. In short, if you’re chatting on Snapchat, expect to see more relevant ads popping up, powered by Microsoft.


Getty Images promises its new AI contains no copyrighted art

Getty Images is backin’ up its new AI system, built with Nvidia, saying it’s clean of any copyrighted content. It’s trained entirely on Getty’s own library, no internet-scraped stuff. Craig Peters, Getty’s CEO, says companies can use it worry-free; Getty will even handle any legal trouble that pops up.

Getty is paying artists in a Spotify-like way, and the creators and the folks in the images are all cool with their stuff being used. But figuring out who gets paid for what could get messy, according to a prof specializing in AI and law.

Getty’s AI isn’t gonna know real people or places, so no deepfake worries. Peters says some tech folks claim AI needs copyrighted content and artists can just opt out, but he’s calling those claims nonsense. He reckons some folks are genuinely mindful about this stuff, but others are just in it for a quick buck.


Google Launches Free & Paid Generative AI Training Courses

Google’s rolling out some new AI training stuff, and it’s divided into two parts: one for folks who aren’t tech wizards and one for the real computer whizzes out there.

So, the first one is called “Introduction to Generative AI”. It’s free, and it’s basic—no crazy technical stuff. It’s perfect for people in sales, marketing, HR, those kinds of jobs. You can snag a digital badge in about two hours showing you know the basics of Google’s AI. 

Now, the second part, “Generative AI for Developers”, that’s the nitty-gritty stuff. It’s for app developers and the like. But this one ain’t free. You gotta have Google Cloud credits to dive into the deeper technical labs. You can also get a subscription to access all the AI content and a year’s worth of training.

They’ve even teamed up with DeepLearning.AI for a new course, “Understanding and Applying Text Embeddings with Vertex AI.” It’s another freebie and gives insights into classifying and detecting unusual data, grouping text, and searching semantically.
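To get a feel for what “searching semantically” with text embeddings means, here’s a minimal, library-free sketch. The tiny 3-dimensional vectors and document names below are made up for illustration; real embedding models (like the ones the Vertex AI course covers) produce vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # close to 1.0 means similar meaning, close to 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = {
    "grilling tips": [0.9, 0.1, 0.0],
    "podcast news":  [0.1, 0.8, 0.2],
    "stock report":  [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # stands in for an embedded search query

# Semantic search: rank documents by similarity to the query vector.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # -> grilling tips
```

The same similarity score drives the course’s other topics too: clustering groups texts whose vectors sit close together, and anomaly detection flags texts far from every cluster.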


Microsoft is hiring a nuclear energy expert to help power its AI and cloud data centers

Microsoft’s got plans to bring in a nuclear energy expert. Why? Because their AI and cloud data centers need a ton of power, and they’re exploring nuclear reactors to keep the lights on. They want small, modular ones, since they’re quicker and cheaper to build than the old-school massive ones.

They’ve put up a job posting looking for a top dog to take charge of this nuclear venture. This person will steer the ship in adopting and rolling out the nuclear strategy globally. We’re talking about nuclear fission here, where atoms split and release loads of energy.


AI Meets Med School

The University of Texas at San Antonio is mixing med school with tech, offering a new dual-degree that pairs medicine with artificial intelligence. Aaron Fanous, a med student with a techy side, is among the first to join. He sees big potential in combining medicine and AI, especially as tech starts playing a bigger role in healthcare. 

While other colleges have been dabbling in AI, this program stands out. Students will split their time between UT Health San Antonio and UT San Antonio, snagging both a medical degree and an AI master’s in five years. They can even dive deeper into computer science, data analysis, or autonomous systems. The goal? Make sure doctors have a say in how AI shapes their field. 

The program has been in the works for four years and aims to bridge the gap between tech and medicine. The hope is that this kind of program spreads, with other colleges catching the vision. And this might just be the start – even the dental school’s showing interest!


Is depression lifting? AI that interprets brain waves has answers

Researchers have used AI to find a brain signal that could show if folks are recovering from depression. This work is based on a method called deep-brain stimulation (DBS), where they put electrodes in the brain to change neural activity, kinda like rewiring it.

So, in this study, they stuck this DBS device with sensors in the brains of ten people suffering from severe, treatment-resistant depression. The sensors measure brain activity. After about six months of brain zaps, almost everyone felt better; seven out of ten were even considered in remission.

The team used AI to study the brain activity data and found a specific brain signal that showed whether the person was depressed or recovering, with over 90% accuracy. This could mean that doctors could have real-time data, helping them switch up the treatment plan if someone’s not getting better, instead of relying on patients’ words.
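The article doesn’t spell out the team’s actual model, but the general recipe (learn a decision rule from labeled brain-activity features, then score held-out recordings for accuracy) can be sketched with a toy nearest-centroid classifier. Every number below is invented for illustration; the real study used rich multi-channel neural recordings.

```python
# Toy stand-in: learn a rule from labeled "signal" features,
# then estimate accuracy on held-out samples with known labels.

def centroid(samples):
    # Mean of a class's training features.
    return sum(samples) / len(samples)

# Hypothetical one-number "signal power" feature per recording.
depressed_train = [0.82, 0.91, 0.78, 0.88]
recovering_train = [0.31, 0.25, 0.40, 0.28]

c_dep = centroid(depressed_train)
c_rec = centroid(recovering_train)

def classify(x):
    # Nearest-centroid rule: pick whichever class mean is closer.
    return "depressed" if abs(x - c_dep) < abs(x - c_rec) else "recovering"

# Held-out samples paired with their true labels.
test_set = [(0.85, "depressed"), (0.30, "recovering"),
            (0.79, "depressed"), (0.35, "recovering"),
            (0.90, "depressed")]
accuracy = sum(classify(x) == y for x, y in test_set) / len(test_set)
print(f"accuracy: {accuracy:.0%}")  # -> accuracy: 100%
```

On real, noisy neural data the classes overlap far more, which is why the study’s reported figure of “over 90% accuracy” is the headline result rather than a given.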


Revolutionary AI Set To Predict Your Future Health With a Single Click

Scientists from Edith Cowan University created software that checks bone scans super fast to spot a sign of heart risk called abdominal aortic calcification (AAC). This isn’t just any heart risk: it’s the buildup of calcium in the abdominal aorta, the body’s main artery, and it can signal major problems like heart attacks. The same scans are already used to look for weak bones. This new tech can breeze through about 60,000 scans in a day.

A bunch of top-notch universities from around the globe worked on this. While this isn’t the first tool of its kind, it’s the biggest study, it used the most common scan machines, and it got tested in the real world. Across over 5,000 images, the experts and the software agreed 80% of the time on how bad the AAC was. And only 3% of the worst cases got a wrongly low-risk rating from the software.

Professor Lewis pointed out that this software is just getting started. It’s version 1.0 and they’re already making it better. If it keeps up, it could be a game-changer, finding heart problems before folks even feel sick. This means people can start making healthier choices way earlier.


These new tools could make AI vision systems less biased

Computer vision systems, used for tagging images and detecting objects in photos, often show bias, especially towards people of color and women. Researchers at Sony and Meta are developing tools to measure and address these biases. Sony’s tool examines skin tone in two dimensions, considering color and hue, providing a more nuanced approach than the traditionally used one-dimensional Fitzpatrick scale, which often oversimplifies and misrepresents skin tones.

Meta’s tool, called FACET, assesses fairness across various computer vision tasks and includes a broad range of fairness metrics. It utilizes a diverse and detailed dataset of 32,000 human images annotated with various attributes and actions to evaluate biases in AI models. These evaluations highlighted significant disparities; for instance, models were more accurate at detecting lighter-skinned individuals.
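As a rough sketch of the kind of check a benchmark like FACET enables, you can tally a detector’s hit rate per annotated group and compare. The records, group labels, and numbers below are invented for illustration, not FACET’s real annotations or results.

```python
# Each record: (annotated group, did the model detect the person correctly?)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

def group_accuracy(results, group):
    # Fraction of correct detections within one annotated group.
    hits = [ok for g, ok in results if g == group]
    return sum(hits) / len(hits)

acc_lighter = group_accuracy(results, "lighter")  # 0.75 on this toy data
acc_darker = group_accuracy(results, "darker")    # 0.50 on this toy data

# The gap between groups is the disparity a fairness audit reports.
print(f"accuracy gap: {acc_lighter - acc_darker:.2f}")  # -> accuracy gap: 0.25
```

A non-zero gap like this is exactly the kind of disparity the Meta evaluations surfaced, e.g. models detecting lighter-skinned individuals more reliably.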


AI-Generated Naked Images Of Minor Girls Spark Outrage In Spanish Town

A town in Spain is in an uproar after fake, AI-created, naked pictures of minor girls got spread around online. These pictures, some showing girls as young as 11, were made using an app that put the girls’ faces, taken from their social media, onto other bodies. About 20-30 girls, aged 11 to 17, have reported being victims of this sick stunt.

María Blanco Rayo, whose 14-year-old daughter is one of the victims, and Dr. Miriam Al Adib, another parent of a victim, are speaking out. They’re saying these twisted pictures look real and are causing a lot of harm. Dr. Al Adib is urging girls and parents to report these acts and join a support group they’ve set up.