Google has updated its privacy policy so it can collect and use what people post publicly online to train its AI systems. The change broadens the scope from improving language models to all of its AI products, meaning everything from Google Translate to its cloud AI services could learn from our public posts.
The move has privacy advocates up in arms. Anyone can see what we post publicly, but how that information gets used is now an open question. The legality of training AI on public posts is still unsettled, so expect the courts to spend a while sorting it out.
Web scraping, where tech companies hoover up data from the web, is another hot-button issue. Bigwigs like Elon Musk are sounding the alarm, linking it to Twitter glitches. With companies like Google and Twitter in the mix, web scraping’s at the heart of the debate about how our data’s used and kept safe.
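One mechanical piece of the web-scraping debate is easy to show: well-behaved scrapers check a site's robots.txt before fetching anything. Here's a minimal sketch using Python's standard-library robotparser; the robots.txt contents, the user-agent name, and the URLs are all made up for illustration.

```python
from urllib.robotparser import RobotFileParser

# A toy robots.txt, parsed from memory rather than fetched over the network.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

def make_parser(robots_text: str) -> RobotFileParser:
    """Build a RobotFileParser from an in-memory robots.txt string."""
    parser = RobotFileParser()
    parser.parse(robots_text.splitlines())
    return parser

parser = make_parser(ROBOTS_TXT)

# A well-behaved scraper checks each URL before fetching it.
print(parser.can_fetch("MyScraper", "https://example.com/public-page"))   # True
print(parser.can_fetch("MyScraper", "https://example.com/private/data"))  # False
```

Whether companies training AI models actually honor these files is, of course, part of the dispute.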
If you’re antsy about Google’s new rule, you can take steps to keep your data safe. Only post stuff you’re okay with Google using. Tweak your privacy settings. Use alternatives to Google’s services. Go incognito. Read privacy policies carefully. And let Google know if you’re not cool with how they might use your data.
In short, as AI gets more complex, tech companies are hungrier for data. But that data collection needs to happen in a way that’s legal, ethical, and transparent. We should also think hard about what we put online – who knows how it might be used down the line. AI has lots to offer, but there are plenty of bumps in the road to navigate.
Data scientists describe the arduous, AI-powered task of getting Amazon’s Alexa to speak in an Irish lilt
Two Amazon data scientists managed to teach Alexa to chat with an Irish twang. They used a kind of artificial intelligence that can mimic human speech and played with accents. This cool tech has been really taking off recently and is used in stuff like audiobooks and ads.
The task of making Alexa talk Irish was a tough nut to crack. They had to zero in on specific bits of speech, like tone and rhythm, a process they call “voice disentanglement.” The idea is to make voice assistants sound more diverse, more real.
When they worked on Alexa’s Irish makeover, which hit the market last November, they put in hours of language training with voice actors. But AI helped speed things up by mimicking local accents and dialects.
They started with the existing British Alexa, and trained it on Irish accents using text-to-speech and voice recordings. They then fiddled with certain sounds, like beefing up the “r” sounds the way an Irish person would. The result? An Alexa that can now charm you with an Irish lilt.
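The piece doesn't spell out Amazon's actual model, so here's a deliberately toy sketch of the idea behind "voice disentanglement": treat a voice as separable speaker-identity and accent parts, then recombine an existing speaker with a new accent. Every name and number below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Voice:
    # A toy "embedding": who is speaking, plus accent-specific traits.
    speaker_id: str
    accent: str
    rhoticity: float   # how strongly "r" sounds are pronounced
    rhythm: float      # toy stand-in for prosodic timing

def disentangle(voice: Voice):
    """Split a voice into a speaker part and an accent part."""
    speaker = {"speaker_id": voice.speaker_id}
    accent = {"accent": voice.accent,
              "rhoticity": voice.rhoticity,
              "rhythm": voice.rhythm}
    return speaker, accent

def recombine(speaker: dict, accent: dict) -> Voice:
    """Give an existing speaker a new accent, e.g. British Alexa -> Irish Alexa."""
    return Voice(speaker_id=speaker["speaker_id"], **accent)

british_alexa = Voice("alexa", accent="en-GB", rhoticity=0.2, rhythm=0.5)
irish_sample = Voice("voice_actor", accent="en-IE", rhoticity=0.9, rhythm=0.7)

speaker, _ = disentangle(british_alexa)
_, irish_accent = disentangle(irish_sample)
irish_alexa = recombine(speaker, irish_accent)
print(irish_alexa)  # same speaker identity, but with the Irish accent features
```

In the real system the "parts" are learned representations inside a neural text-to-speech model, not labeled fields, but the separate-then-recombine shape of the trick is the same.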
Konux gears up to scale its AI + IoT play for optimizing the railways
Munich-based tech company Konux is shaking up the rail game by using Artificial Intelligence (AI) and the Internet of Things (IoT) to improve railway infrastructure. Their goal? Make rail travel, the most eco-friendly mass transit method we’ve got, even more efficient. Their secret weapon? A software-as-a-service platform that uses sensors and AI to predict maintenance needs. With governments and rail operators looking to digitize networks, Konux is stepping up to the plate.
Their tech measures vibrations in the rails to spot potential problems. These AI-driven predictions are held to a 90% accuracy standard, says CEO Adam Bonnifield. This lets rail operators anticipate issues before they happen, making for smoother operations. For passengers, this means fewer delays and more on-time trains. On top of that, Konux offers tools for monitoring network usage and smarter scheduling.
The big picture? Doubling the capacity of current rail networks without building more track, which could be a game changer in our fight against climate change. Today, Konux tech is only monitoring a part of the rail networks, but they’re aiming for full coverage and max impact. Bonnifield believes their tech is an essential part of the solution to meeting global climate goals. After all, if we can move more people and goods on existing rail lines, we’re reducing our carbon footprint significantly.
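Konux hasn't published its models, but the core idea of flagging unusual rail vibrations can be sketched with a simple rolling z-score check. Everything here (window size, threshold, readings) is a made-up illustration, not Konux's method.

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag vibration readings that deviate sharply from the recent baseline.

    Compares each reading against the mean/stdev of the previous `window`
    readings; a large z-score suggests wear or damage worth inspecting.
    """
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady vibration levels from a track-mounted sensor, then a spike at index 8.
vibrations = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
print(flag_anomalies(vibrations))  # [8]
```

A production system would of course learn what "normal" looks like per track section rather than use a fixed window, but the predict-before-it-breaks logic is the same.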
AI startup NoTraffic raises $50M to tackle congestion, road safety
AI traffic tech whiz NoTraffic just scored a cool $50M in a Series B funding round, aiming to shake things up in the world of city traffic management. The Israel-based startup is on a mission to make traffic lights smarter, saying goodbye to the old fixed-timer system and hello to data-driven traffic control. The system uses sensors and V2X chips (basically, a way for everything on the road to communicate with everything else) to paint a full picture of traffic movement and make smart decisions to keep things moving smoothly.
It’s not just about keeping traffic flowing though. NoTraffic’s tech also wants to help prevent accidents. If a car’s about to blow through a red light, or a pedestrian steps into the street, the system can give a heads up to connected vehicles nearby to prevent a collision.
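NoTraffic's actual algorithms aren't public, so here's a hedged toy of the two ideas above: picking the green phase from live queue data instead of a fixed timer, and a V2X-style warning when a vehicle can't stop before a red light. All the numbers and logic below are hypothetical.

```python
def pick_green_phase(queues: dict) -> str:
    """Give the green to whichever approach has the longest queue,
    instead of rotating on a fixed timer."""
    return max(queues, key=queues.get)

def warn_nearby_vehicles(signal: str, speed_mps: float, distance_m: float) -> bool:
    """Toy V2X check: if a vehicle can't stop before the line on a red,
    broadcast a warning to connected vehicles near the intersection."""
    # Assume a comfortable braking deceleration of ~3 m/s^2.
    stopping_distance = speed_mps ** 2 / (2 * 3.0)
    return signal == "red" and stopping_distance > distance_m

queues = {"north": 12, "south": 3, "east": 7, "west": 5}
print(pick_green_phase(queues))                 # north
print(warn_nearby_vehicles("red", 20.0, 40.0))  # True: likely to run the light
```

The real system fuses camera and radar data and weighs safety, pedestrian demand, and network-wide flow, not just one intersection's queues, but this is the basic shape of data-driven signal control.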
From only a few U.S. cities under its belt in 2021, NoTraffic’s grown like a weed. It now works with a hundred transportation departments across 13 states, including California and Texas. With this new pile of cash, it’s looking to set up shop in new markets like Japan and the U.K., aiming to double its reach in 2023.
New AI translates 5,000-year-old cuneiform tablets instantly
A team of researchers has built an AI that can translate Akkadian like lightning. Akkadian was the language of an empire around 2300 B.C., but it’s dead as a doornail now. Its speakers used a writing system called cuneiform, pressing wedge shapes into wet clay and then baking it hard. That clay is tough as nails, so we’ve still got a ton of their texts, unlike some other ancient writing.
Translating Akkadian is a two-step dance. You’ve got to turn the cuneiform into sounds we can understand, then you’ve got to translate those sounds into a modern language. But the team’s AI cuts through this like a hot knife through butter. They taught it using loads of cuneiform texts, and it learned to translate both from sounds and from the symbols directly.
When they tested it, the AI scored solid marks, showing it could keep up with the style of different types of texts. It’s a promising start, though the AI sometimes mucks up or goes off the rails. For now, it works best on shorter, more standard texts. The team hopes with more training, it can work as a sidekick to human experts, speeding up the work and helping to unlock the secrets of ancient Mesopotamia.
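To make the two-step dance concrete, here's a toy pipeline with hand-written lookup tables standing in for the neural models: stage one maps signs to phonetic readings, stage two maps readings to English. The real system learns both stages from large corpora (and can also translate straight from the signs); the tiny vocabulary below is illustrative only.

```python
# Stage 1: signs -> transliteration; Stage 2: transliteration -> English.
# Two hand-picked example readings; a real model covers thousands.
SIGN_TO_SOUND = {
    "LUGAL": "sharrum",   # the sign conventionally read as "king"
    "KALAG": "dannum",    # the sign conventionally read as "strong"
}
SOUND_TO_ENGLISH = {
    "sharrum": "king",
    "dannum": "mighty",
}

def transliterate(signs):
    """Stage 1: map each cuneiform sign to its phonetic reading."""
    return [SIGN_TO_SOUND[s] for s in signs]

def translate(sounds):
    """Stage 2: map the phonetic readings to English words."""
    return " ".join(SOUND_TO_ENGLISH[s] for s in sounds)

tablet = ["LUGAL", "KALAG"]
sounds = transliterate(tablet)
print(sounds)             # ['sharrum', 'dannum']
print(translate(sounds))  # king mighty
```

Where the dictionaries here can only fail on an unknown sign, a neural model guesses from context, which is exactly why it sometimes "goes off the rails" on long or unusual texts.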
Ivy League university unveils plan to teach students with AI chatbot this fall
This fall, students at a top-tier U.S. university, Harvard, will find a new type of teacher in their programming class: an AI chatbot. Professor David Malan, who’s running the show, sees this as a fresh twist on traditional teaching methods. This step towards using AI in education comes as the field of artificial intelligence has taken off, changing the tech game big time.
Though the concept sounds exciting, there are a few red flags. Martin Rand, a tech leader from PactumAI, has a word of caution. He says these AI systems are statistical models, and while they can give the most likely answer, they may not offer the best one. Rand adds that human teachers will still be crucial in ensuring excellence, and using AI chatbots for basic level courses seems like the right move to him.
Despite these worries, Rand believes there’s a bright side. The chatbot could stimulate growth, spark innovation, and enhance learning. According to the university’s newspaper, The Harvard Crimson, Professor Malan sees the use of the AI chatbot in the intro to programming course as part of their tradition of bringing new tech into the curriculum.
The professor hopes the AI tool will make it feel like each student has their own personal teacher. It’s designed to be a 24/7 resource that can support students’ learning at their own pace. According to the report, the AI chatbot will help students spot mistakes in their code, answer questions, give feedback, and assist in learning the coding process.
Police turning to artificial intelligence for traffic help
Down in Fort Mill, S.C., the York County Sheriff’s Office is teaming up with local company EPIC iO to take a bite out of event traffic problems using artificial intelligence (AI). They’re setting up security cameras with AI that can provide real-time traffic information to the cops.
They’re testing this tech at the Carowinds’ Fourth of July shindig, where about 60,000 folks are expected. The goal is to keep cars moving smoothly and let officers respond faster to any issues.
These high-tech cameras can count cars, pedestrians, and even spot jaywalkers. This info helps cops direct traffic and keep an eye out for trouble, pronto.
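As a rough illustration of what such a camera system computes per frame, here's a toy that tallies hypothetical detector output and flags pedestrians outside a crosswalk band as possible jaywalkers. The detector, coordinates, and crosswalk bounds are all invented.

```python
def summarize_frame(detections, crosswalk_x=(40, 60)):
    """Tally one camera frame's detections and flag pedestrians
    walking outside the crosswalk band as possible jaywalkers."""
    counts = {"car": 0, "pedestrian": 0}
    jaywalkers = 0
    for label, x in detections:
        counts[label] += 1
        if label == "pedestrian" and not (crosswalk_x[0] <= x <= crosswalk_x[1]):
            jaywalkers += 1
    return counts, jaywalkers

# Hypothetical detector output: (class label, x-position in the frame).
frame = [("car", 10), ("car", 80), ("pedestrian", 50), ("pedestrian", 90)]
print(summarize_frame(frame))  # ({'car': 2, 'pedestrian': 2}, 1)
```

The hard part in practice is the detection itself (a trained vision model running on each camera); once you have labeled boxes, the counting and zone checks are this simple.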
EPIC iO’s big plan is to help law enforcement hit ‘Vision Zero’—that means no pedestrian fatalities by 2030. To reach this, they’re giving public safety a helping hand with their AI tech.
Despite some folks being wary about AI, EPIC iO is working hard to clear up misunderstandings and show how their tech can help cops, especially when resources and budgets are slim.
Grammys boss Harvey Mason Junior says music ‘with AI-created elements’ is ‘absolutely eligible for entry’
Heads up: AI is rockin’ the music industry, and the Grammys’ bigwig, Harvey Mason Jr., is in on it. He said tunes that pack a punch with AI-blended sounds are fair game for Grammy nominations. The only catch? The tune can’t be 100% robot-made. No human touch, no dice.
This is straight from the Recording Academy’s new playbook, which caused a real ruckus when it announced that you can’t bag a Grammy without a human touch. The rule revamp also means folks need to pitch in at least 20% to an album to snag a nomination, whereas before, even the smallest contribution could get you a ticket to the big time.
Clearing the air, Mason Jr. stated AI isn’t gonna bag a Grammy itself, and while an AI-fronted track can strut its stuff in songwriting, it can’t step up in performance. If a human belts out a ditty in the studio, but the lyrics or beats were cranked out by AI, it’s a no-go for the composition or songwriting category.
He also hinted we might see tunes made with a bit of AI wizardry in the Grammy nominations next year. The 2024 Grammy Awards are all set to kick off in Los Angeles on Sunday, February 4. Be there or be square.
Stability AI CEO: There Will Be No (Human) Programmers in Five Years
Emad Mostaque, head honcho at Stability AI, has a bold forecast that might ruffle a few feathers: in half a decade, we won’t have human coders anymore. According to him, artificial intelligence (AI) is making giant leaps and bounds that’ll push human programmers out of the scene.
Chatting with Peter H. Diamandis on the Moonshots and Mindsets Podcast, Mostaque shared his thoughts on the AI industry and its future. And his insights are something to mull over. He pointed to GitHub stats suggesting that nearly half of all code today is whipped up by AI, a sign of just how quickly it’s catching up to human coders.
Stability AI, known for Stable Diffusion, their top-of-the-line open-source image generator, isn’t content to rest on its laurels. They’re setting their sights on all sorts of projects, from cracking the code of protein folding, DNA analysis, and chemical reactions, to mastering language and visual data processing. Their grand vision? Building the nuts and bolts for a “society OS.”
And if you’re thinking about what’s around the corner, Mostaque predicts you’ll have ChatGPT on your mobile phone, even without an internet connection, by the end of next year. This could usher in a new era of how we shoot the breeze with our devices.
Japan publishes guidelines allowing limited use of AI in schools
Japan’s education bigwigs gave a thumbs up to using smarty-pants tech like ChatGPT, a chatbot, in their schools, but they’ve got some ground rules. They’re mostly cool with it in middle and high schools but they’re a little antsy about the little ones using it in elementary school.
They’re calling out any kid who tries to pass off work done by AI as their own—straight-up cheating, they say. But they’re also kinda worried that letting AI do the heavy lifting might put a damper on kids’ own brain muscles. They’re gonna keep things on a short leash to start, only letting a few schools give it a whirl, and then they’ll tweak the rules based on how that goes.
They’re hopeful that AI will help kids learn better, but they’re also wary. They know it might lead to private info getting spilled, breaking copyright rules, or even snuffing out kids’ own creative spark. They want to be sure kids know that there’s a right and wrong way to use this tech, and that they need to be careful about not giving away personal stuff or stepping on copyrights.
The rules are pretty clear about when it’s not okay to use AI, like during tests or to do your own work for you. But they also see the good side, like how AI can give fresh ideas for class chats. They want students and teachers to keep their eyes open to any bloopers the AI might spit out.