By disallowing Google-Extended in robots.txt, you can now prevent Bard and Vertex AI from accessing your website or specific pages.


Google introduces Google-Extended to let you block Bard, Vertex AI via robots.txt

Google’s rolled out a new control called Google-Extended. It lets website owners decide whether Google’s AI tools, Bard and Vertex AI, can use their site content. Bard is Google’s conversational AI, and Vertex AI is its platform for building generative chat and search applications.

So, how does it work? It uses robots.txt, the standard file websites use to tell crawlers what they can and can’t access. If you run a website and want to keep Google-Extended away, you just add a couple of lines to your site’s robots.txt file. It’s like putting up a “no trespassing” sign, but for Google’s new crawler token.
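Per Google’s announcement, blocking Google-Extended site-wide takes a two-line robots.txt entry:

```
User-agent: Google-Extended
Disallow: /
```

To block only specific pages, swap the “/” for a path, e.g. `Disallow: /members-only/` (the path here is just an example).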


‘Biggest act of copyright theft in history’: thousands of Australian books allegedly used to train AI model

Thousands of Australian books may have been caught up in what’s being called “the biggest act of copyright theft in history.” The U.S.-based Books3 dataset is accused of pirating these works to train AI models for big corporations like Meta and Bloomberg. Richard Flanagan, a well-known novelist, found 10 of his books in the Books3 dataset and called it shocking. He, along with other authors, is upset because their works were used without permission.

The Australian Publishers Association says around 18,000 titles appear to be affected by this copyright mess. This has created a huge legal and ethical problem for authors and publishers around the world. The Australian Society of Authors and individual writers are horrified and are demanding transparency and compensation from tech companies profiting from AI trained on their works.


Mayo Clinic to deploy and test Microsoft generative AI tools

Mayo Clinic is teaming up with Microsoft to test out a new tool called Microsoft 365 Copilot, designed to make work easier. This tool uses AI to help with productivity. Mayo Clinic, being a big deal in healthcare, is trying this out with their doctors and health workers to see how it can help them in their daily tasks.

Microsoft 365 Copilot can help automate some of the boring stuff, like filling out forms, letting doctors and other staff focus more on taking care of patients. It plugs into popular apps like Outlook, Word, and Excel, turning plain-language prompts into drafts and summaries so staff can get things done more efficiently.


Medium hints at a nascent media coalition to block AI crawlers

Medium, the online publishing platform, is giving the boot to OpenAI’s GPTBot, saying it can’t scrape Medium’s pages for content anymore. They’re upset ’cause AI companies, like OpenAI, are making bucks off writers’ content without asking or paying them. Medium joins CNN, The New York Times, and others that have already blocked the bot, though it seems TechCrunch hasn’t jumped on this bandwagon just yet.
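The blocking itself is simple: OpenAI documents a dedicated user-agent token, so shutting GPTBot out site-wide is another two-line robots.txt entry:

```
User-agent: GPTBot
Disallow: /
```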

This could be a game-changer, making it tougher for AI platforms to exploit content. The thing is, getting an industry to team up is slow going, thanks to red tape and unsettled legal and ethical questions, especially when AI is the new kid on the block and everyone’s still figuring out the rules of the game.


AI predicts how many earthquake aftershocks will strike — and their strength

Seismologists are getting better at forecasting earthquakes using machine learning. These new methods aren’t about pinpointing the exact time and place of an earthquake – no one can do that. Instead, they help predict stuff like how many aftershocks might come after a big quake. 

Scientists used to rely on fairly simple models built from past quake data, but now three new studies are using neural networks. These new approaches seem better at predicting quake patterns. One method even did a great job analyzing quakes in California from 2008 to 2021.

Another did well with quakes in Italy in 2016-17, and a third aced tests with 30 years of Japanese quake data. Experts are optimistic, but we’re not at a game-changing moment yet. Over time, agencies like the USGS might start using these new methods more, especially since there’s a ton of earthquake data available nowadays. 


DataStax takes aim at event-driven AI with open source LangStream project

DataStax has a new thing called the LangStream project. It lets folks handle real-time streaming data better. Instead of waiting for data, they get it right when it happens. Think of it like getting play-by-play updates during a game instead of just the final score. The big win here? Applications can act immediately when they get new info.

LangStream used to only play nice with DataStax’s own database, but now, it’s branching out and working with other databases too. In simpler terms, it’s kinda like making a universal remote that works with various TVs.

How does LangStream do its magic? It uses Apache Kafka, a popular tool for moving event data fast. When data comes in, LangStream processes it and streams it back out, making sure AI tools can use it right away.
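The consume-process-produce loop at the heart of this pattern can be sketched in plain Python. This is an illustration of the idea using in-memory queues, not LangStream’s actual API; a real pipeline would read from and write to Kafka topics, and the event fields here are made up:

```python
from collections import deque

# Stand-ins for Kafka topics: an input stream of raw events
# and an output stream of processed events.
input_topic = deque([
    {"user": "alice", "text": "goal scored in minute 12"},
    {"user": "bob", "text": "yellow card in minute 34"},
])
output_topic = deque()

def process(event):
    """Enrich an event as it arrives, e.g. so an AI tool
    downstream can consume it immediately."""
    return {**event, "tokens": event["text"].split()}

# Consume-process-produce loop: handle each event the moment
# it appears instead of batching and waiting.
while input_topic:
    event = input_topic.popleft()
    output_topic.append(process(event))

for enriched in output_topic:
    print(enriched["user"], len(enriched["tokens"]))
```

The point of the pattern is in the loop: each event is processed and republished as soon as it arrives, which is what lets downstream applications act on play-by-play updates rather than the final score.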


Disease X: How AI could help plan our response to future pandemics

Scientists are figuring out if tools like AI can help us plan for future disease outbreaks. Researchers are jazzed about AI’s knack for spotting early warning signs in big health data piles. But Alain Labrique from the World Health Organization says we gotta make sure our AI tools aren’t biased and are working with diverse data.

At Yale, some brainiacs developed an AI tool that can predict how hard a disease might hit someone and how long they’d be in the hospital. The tool uses signals from the body, called biomarkers, to make these predictions. The goal? If another disease spreads fast, this AI tool could quickly help hospitals prep by analyzing early data.

In the UK, they’re using AI to decide when might be a good time to take actions like lockdowns or mask mandates. Rachel Dunscombe says with the right data, AI can give a heads-up on what might happen next.

Virginia Tech researchers, meanwhile, are trying to get AI to think like humans during an outbreak. Like, what might we decide to do if a virus starts spreading? They created a fake town with a fake virus and saw how AI ‘people’ might act.