A Groundbreaking Tool That Lets You Seamlessly Transform Your Imagination into Dynamic, Eccentric, and Engaging GIFs Ready for Sharing Across Various Platforms
Animated GIF generator from Picsart makes AI fun again
Picsart, that well-liked image editor, has just dropped an AI-driven animated GIF maker. If you’re old enough to remember last year, think back to when DALL-E was the bee’s knees, creating pictures from text. Well, Picsart’s tool goes one better – it makes moving pictures. Yup, animated GIFs. No more hunting for the perfect It’s Always Sunny in Philadelphia meme – you can cook up your own crazy scenarios.
The tool’s a real hoot, and it’s easy as pie to use. Just feed it your wild thoughts and voila, out pops your very own madcap GIF. It’s all part of the regular Picsart app, available for iOS, Android, and even web. Download and share the funnies to your heart’s content.
Don’t go expecting anything close to real-life visuals, though. The tool leans more toward the Looney Tunes side of things. But that’s all part of the shenanigans. So why wait? Dive in and start stirring up some GIF goofiness! Just remember, this ain’t the tool for crafting your next big-budget comic book TV show opening. Now, go have a blast!
Humane’s aspiring smartphone-killer ‘Ai Pin’ may be the most 2023 product yet
Humane, a company with a dream to replace smartphones, is staffed up with a bunch of former Apple folks. They’re coming up with a cool new gadget, the ‘Ai Pin’, which they’re developing alongside Qualcomm Technologies. The gizmo is loaded with smart features that don’t need an internet connection, thanks to Qualcomm’s nifty AI chips.
Humane’s Ai Pin aims to deliver a top-notch AI experience, stuffed full of smart features right in the device. Its sleek look hides some serious power. It can process real-time information and provide a fresh, fun experience for the person using it, according to Qualcomm’s VP of Business Development.
The device was shown off back in April. It fits neatly in your pocket, can project things like caller ID and messages onto your hand, and even translates your voice in a jiffy, making it sound like you’re speaking another language.
Mozilla Developer Network adds AI Help that does the opposite
Mozilla Developer Network (MDN), a go-to for web developers, just added an AI assistant called AI Help, and let’s just say, it ain’t all that helpful. A developer named Eevee opened a GitHub issue pointing out that the bot doesn’t deliver the right info. Eevee even went as far as saying MDN was telling “convincing-sounding” lies and that they wouldn’t trust or use MDN until the AI was nixed.
AI Help was supposed to be a game changer, meant to make info search easy for users. It digs through the MDN’s vast documentation to give a quick and concise summary. There’s also an AI Explain feature, which is meant to shed light on webpage text, but it seems to be stirring the pot, with some even claiming the general AI Help gives wrong answers too.
Lots of developers aren’t thrilled with their new AI buddy. Some say it contradicts itself, messes up CSS functions, and generally doesn’t understand CSS. A few are worried it’ll make users too dependent on flimsy text generation.
MDN has yet to comment, but an MDN core maintainer, sideshowbarker, has clocked the issue. They’ve promised to bring it up with Mozilla and push to get it removed ASAP. For now, the AI Explain function appears to be on pause. Until the rest is sorted, trust in MDN is on thin ice.
Valve responds to claims it has banned AI-generated games from Steam
Valve, the big kahuna of PC gaming distribution and the creator of Half-Life, was under fire recently for allegedly refusing to allow AI-generated games on its platform, Steam. One indie developer shared on a forum that Valve gave them the cold shoulder because their game used AI-generated content. Valve took issue due to the murky waters of who legally owns AI-created art. Basically, Valve won’t release games using AI-generated assets unless the developer can prove they own the rights to all the data used to train the AI that created the game assets.
A week after their first warning, Valve added more context, saying it’s not certain whether developers using AI tools hold enough rights to the data those tools were trained on. Considering most AI tools can’t clearly state they own the rights to their training data, this policy effectively slaps a ‘NO ENTRY’ sign on games with AI-generated content.
The use of AI in game development isn’t the issue here. Big league players like Ubisoft sing praises about AI assistance in game creation. The problem lies with generative AI that’s fed by the work of unpaid artists. If the creators can’t stake a claim on the copyright of their work, Valve sees it as too dicey to publish.
Valve told Eurogamer that their stance is more about following the letter of the law rather than having a beef with AI. They’re not trying to curb the use of AI on Steam, but rather figuring out how to work it into their review policies. Their process reflects current copyright laws, not their personal views. As these laws change, so will their process.
Valve is offering refunds of the usual no-backsies app submission fee in instances where this policy is a game-changer. Whether the use of AI is anything more than a little experimentation or an easy cash-grab remains to be seen. It’s too early to mourn the loss, but as more developers start using AI tools and these tools improve, it could become a serious bone of contention.
AI, Digital Twins to Unleash Next Wave of Climate Research Innovation
NVIDIA’s CEO, Jensen Huang, made a big splash at the Berlin Summit, talking about using artificial intelligence (AI) and accelerated computing to take a huge leap forward in climate research. Basically, he outlined three “miracles” we need to pull off:
First, we’ve got to simulate the climate quickly and clearly, down to a few square miles. Second, we gotta prep and handle a ton of data. Lastly, we need to make sense of all this data in a way that policymakers, businesses, and scientists can use easily. That’s where NVIDIA’s Omniverse comes in.
There’s also this new project called the Earth Virtualization Engines (EVE), where folks are pooling resources to make detailed climate information easily accessible. Huang says EVE and Earth-2, NVIDIA’s own project, are like two peas in a pod, both aiming to improve our understanding of climate and weather patterns.
The NVIDIA GH200 Grace Hopper Superchip and tools like NVIDIA Modulus and FourCastNet are game-changers. They’re basically gonna supercharge research and make it easier to understand complex stuff like how hurricanes move.
Lastly, Huang touched on digital twins, which are fancy computer models that recreate real-world systems. These models could help us predict climate and weather in places from Berlin to Tokyo.
Python Gets Its Mojo Working for AI
Python, the go-to language for AI, might be getting a serious pep talk from Mojo, a fresh-out-of-the-oven programming language. Mojo aims to mix Python’s friendliness with C’s speed, creating an optimal platform for AI developers. The big selling point? Mojo claims speedups of up to a whopping 3,500x over plain Python.
Writing code in Python and leaning on languages like C for performance-focused sections is the standard practice, but it adds a layer of complexity. This becomes even more challenging with AI, as there isn’t a single programming language that works with every hardware system out there. Plus, deploying code on mobile and servers is another hurdle, with concerns like handling dependencies and performance.
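To make the “two-language problem” concrete, here’s a minimal, illustrative Python sketch (not from Mojo’s docs) of the kind of hot numeric loop that crawls under the interpreter and typically gets rewritten in C or handed to a C-backed library:

```python
import time

def dot(a, b):
    # A hot numeric loop like this runs slowly in the Python interpreter,
    # so in practice it gets pushed down into C or a C-backed library.
    # Mojo's pitch is to speed up code like this without leaving Python-style syntax.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = [1.0] * 100_000
b = [2.0] * 100_000

start = time.perf_counter()
result = dot(a, b)
elapsed = time.perf_counter() - start

print(result)  # 200000.0
print(f"pure-Python dot product took {elapsed * 1000:.1f} ms")
```

Nothing here is specific to Mojo; it just shows why developers keep reaching for a second, compiled language once loops like this land on the critical path.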
Mojo, however, wants to simplify things. Its creators are striving to make it completely compatible with Python, while also developing it as a first-rate language, focusing on low-level performance and control. It should feel familiar to Python programmers and will include new tools for developing performant code that previously would’ve needed C or C++.
The brains behind Mojo is Chris Lattner, who previously developed the Swift language at Apple. Mojo leans on an “intermediate representation” (a common format for machines to read and write) via the LLVM compiler infrastructure, which Lattner also originally created. This makes it possible for various software to enhance programming language functionality across different hardware.
The much-touted 3,500x performance improvement depends on the hardware. The example used to make this claim was run on an AWS machine. Even if the improvement isn’t as drastic for other machines, it’s still noteworthy.
Elon Musk blames AI scraping for Twitter problems
There’s been some funky business at Twitter Inc. lately, with all kinds of issues popping up. The big kahuna, Elon Musk, says the trouble is all due to artificial intelligence data scraping.
A few days back, Twitter folks saw some weird messages like “rate limit exceeded” and “cannot retrieve tweets.” And if you weren’t logged into Twitter, you couldn’t even read tweets. Instead, you got shuttled to a signup page.
Elon hopped onto Twitter to say the squeeze was because of companies going crazy scraping data off Twitter for their AI needs. He wasn’t happy about having to hurry up and add more servers just to cater to AI firms looking for a fast buck.
Elon’s plan to deal with this was to put limits on how many posts you can read per day. Verified accounts could read 6,000, unverified accounts 600, and new unverified accounts 300. But then he upped those numbers a couple times, now standing at 10,000, 1,000, and 500 respectively.
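As a rough sketch, the announced tiers boil down to a per-account daily read cap. The tier names below are made up for illustration; only the caps (10,000 / 1,000 / 500) come from the announcement:

```python
# Illustrative only: tier labels are invented here; the caps are the
# updated per-day figures announced for Twitter accounts.
DAILY_READ_LIMITS = {
    "verified": 10_000,
    "unverified": 1_000,
    "new_unverified": 500,
}

def may_read(tier: str, reads_today: int) -> bool:
    # Allow another read only while the account is under its daily cap.
    return reads_today < DAILY_READ_LIMITS[tier]

print(may_read("unverified", 999))    # True
print(may_read("unverified", 1_000))  # False
```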
Of course, folks on Twitter weren’t exactly thrilled and let loose with hashtags like #TwitterDown, #ratelimitexceeded, and #WTFTwitter. By Sunday evening, though, these complaints weren’t so hot anymore, suggesting maybe things were getting a bit better, or folks were tired of griping.
AI Predicts CRISPR’s RNA-Targeting Effects, Revolutionizing Gene Therapy
Scientists whipped up a computer program named TIGER that can predict how the gene-editing tool CRISPR will work on RNA (the middleman between DNA and protein). What’s nifty about TIGER is that it doesn’t just predict the bullseye hits but also the stray shots of RNA-targeting CRISPR. The result? We can fine-tune how much a gene is doing its thing in our cells.
TIGER’s got a lot going for it. By helping dodge those off-target hits, it might lead to better designed CRISPR treatments. Diseases caused by a gene being too loud or new ways to fight off viruses? TIGER’s got them covered.
So here’s the deal: normally CRISPR works by targeting DNA with a molecular knife called Cas9. But researchers discovered another kind of CRISPR that targets RNA with an enzyme called Cas13.
The new tech here is a machine learning model. It’s like teaching a computer to learn from a ton of data about CRISPR, and then predict what will happen when you let CRISPR loose on RNA. The goal is to maximize the hits on the RNA we want to target, and minimize the hits on RNA we don’t want to mess with.
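To give a feel for the shape of that task, here’s a toy Python sketch (my own simplification, not the actual TIGER model): encode a guide RNA as the kind of numeric vector a model trains on, and score a candidate site with a crude mismatch count standing in for the learned on/off-target score.

```python
RNA_BASES = "ACGU"

def one_hot(seq: str) -> list[int]:
    # Turn an RNA sequence into the flat numeric vector a model trains on.
    vec = []
    for base in seq:
        vec.extend(1 if base == b else 0 for b in RNA_BASES)
    return vec

def mismatch_score(guide: str, site: str) -> int:
    # Crude stand-in for a learned off-target score: count positions where
    # the guide disagrees with a candidate RNA site. The real model learns
    # a far richer mapping from huge libraries of measured guides.
    return sum(g != s for g, s in zip(guide, site))

print(one_hot("AC"))                   # [1, 0, 0, 0, 0, 1, 0, 0]
print(mismatch_score("ACGU", "ACGA"))  # 1
```

The real system replaces the mismatch count with a deep learning model trained on measured guide activity, which is what lets it predict both the wanted and unwanted hits.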
One major thing to note: about one in five mutations is caused by insertions or deletions, so predicting those off-target effects is big news. Before, folks were only looking at on-target hits and mismatches.
In the end, they put TIGER through its paces with about 200,000 guide RNAs (those are what guide the Cas13 to its RNA target). They found that TIGER could accurately predict both on-target and off-target activity.
Full-Body AI Scans Could Be the Future of Preventive Medicine
AI full-body scans are being pitched as the next big thing in preventive medicine, but folks, it’s not all sunshine and roses. Sure, companies like Prenuvo and Ezra are doling out these scans to healthy adults, promising to nab nasty health issues early. But, here’s the rub – the scans are quick to flag something fishy, often leading to unnecessary tests and a wild ride for nothing.
A scan will knock your wallet for a loop, running from $1,350 to $2,500. Despite the steep price, there’s no real proof these scans are gonna add years to your life or save your dough in the long run.
Now, these companies have their eyes on the future, betting big on AI to fine-tune their scans. Think of it as wiping away fog from your bathroom mirror after a hot shower. If the scans could be easy on the wallet and doctors could read ’em right, we might nab serious diseases sooner. But, it’s still a giant question mark.
Despite the hype, companies are struggling to make these scans a regular thing. They’re expensive, not covered by insurance, and the results can be as clear as mud. On top of that, the scans may not be as helpful as intended, considering they’re mostly used by people who can fork over the cash and may not represent the average Joe’s health.
Right now, doctors and bigwig medical societies are giving the side-eye to these full-body scans. They’re seen as more of a dream than a reality in preventive medicine. As much as it could change the healthcare game, there’s a long road ahead to get these scans affordable, widely accepted, and easy to understand. For now, it’s best to keep your shirt on about full-body scans. They’ve got a lot to prove.
Japan leaning toward softer AI rules than EU -source
Japan’s eyeballing a chiller approach to AI rules compared to the tough stance by the European Union. Why? They’re counting on AI to rev up the economy and solidify their rep in advanced tech. They’re looking to draw up a game plan by year’s end that’s more in line with Uncle Sam’s attitude than with the EU’s hard-nosed regulations.
This might throw a wrench in the EU’s plans to make their rules the gold standard worldwide. They want companies to spill the beans about the copyrighted stuff they use to train AI systems to make things like text and images.
EU bigwig Thierry Breton is in Tokyo to plug the EU’s way of doing things and work on chip collaboration. The Japanese official didn’t spill on how their rules will differ from the EU’s.
Prof. Yutaka Matsuo thinks the EU’s rules are too tight and says it’s pretty much impossible to lay out what copyrighted stuff is used in deep learning.
Japan’s hoping AI can help tackle their shrinking population and labor shortage. It might also drive up demand for advanced chips that a government-backed venture, Rapidus, is aiming to produce. The goal? Get Japan back on top in the tech world.
New York employers to start telling applicants when they encounter AI
New York City’s pushing the envelope, making employers tell folks when they’re using artificial intelligence (AI) to decide who gets the job. This is thanks to a law that started being enforced on July 5th. It’s the first law specifically about AI in hiring, but other places are likely to jump on the bandwagon.
This law is all about fairness. It says if a company uses AI to pick who to hire, they gotta tell candidates about it. Plus, they need to check every year to make sure the AI isn’t giving anyone the short end of the stick. That means hiring a third party to do a “bias audit” of the software.
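The core arithmetic behind such a bias audit is an “impact ratio”: each group’s selection rate divided by the rate of the most-selected group. Here’s a minimal sketch of that calculation (real audits slice the data much more finely; the group names and counts below are made up):

```python
def impact_ratios(selected: dict, applicants: dict) -> dict:
    # Selection rate per group: how often the tool advanced that group.
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    # Impact ratio: each group's rate relative to the best-treated group.
    # Ratios far below 1.0 are the red flag an audit looks for.
    return {g: rates[g] / top for g in rates}

ratios = impact_ratios(
    selected={"group_a": 50, "group_b": 20},
    applicants={"group_a": 100, "group_b": 80},
)
print(ratios)  # {'group_a': 1.0, 'group_b': 0.5}
```

In this made-up example, group_b gets picked at half the rate of group_a, which is exactly the sort of gap an annual audit is meant to surface.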
The goal here is to keep an eye on these AI tools. They might save companies time, but they’ve been called out for unfairly favoring some types of people over others.
But if you’re looking for a job and don’t like the idea of AI sizing you up, there’s not much you can do. The law says you can ask for an “alternative selection process,” but the company doesn’t have to agree.
If companies don’t play by these new rules, they’ll get hit with fines. The first one’s $500, and it goes up to $1,500 if they keep messing up.