Meta’s Gen AI Personas are aimed squarely at a younger demographic.


Meta’s AI chatbot plan includes a ‘sassy robot’ for younger users

Meta is building a roster of chatbot companions aimed at keeping younger users engaged. According to The Wall Street Journal, a range of these chatbots is planned, each with its own personality, plus some that celebrities can use to interact with their fans.

Some of the bots have colorful personalities, like a “sassy robot” in the vein of Bender from Futurama, and a nosy one named “Alvin the Alien”. Some people at Meta worry Alvin could make users suspect he’s fishing for their personal information. One bot even cracked a crude joke about dates and vomiting, a sign the personas still need tuning.

Beyond that, Meta is letting celebrities craft their own bots and is testing ones that can help with tasks like coding. The company has been busy in the AI world, aiming to outdo OpenAI with more capable models and giving legs to its virtual-world avatars. More of this technology will be shown at the Meta Connect event.


Prompt engineering for Claude’s long context window

Claude has a very long context window, enough to hold entire books. Anthropic has been studying the best way to get accurate answers out of that long context. In one test, they took a long government report Claude had not seen before and wrote multiple-choice questions about it. Then they varied how the report and questions were presented and measured how well Claude answered.

Asking questions directly gave decent results, but Claude did better when the prompt included examples of questions answered correctly, or pulled out key quotes from the text first. The trick seems to be giving Claude a nudge toward the right passages and keeping the questions specific rather than vague.
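The two prompt tweaks described above (few-shot examples and extracted quotes) amount to plain prompt assembly. Here is a minimal sketch; the `<document>` tags, sample questions, and quotes are illustrative placeholders, not Anthropic’s exact format or the actual report used in the evaluation:

```python
# Sketch of a long-context prompt: the document first, then optional
# supporting quotes, then optional few-shot examples, then the question.

def build_prompt(document: str, question: str, examples=None, quotes=None) -> str:
    """Assemble a long-context Q&A prompt for a model like Claude."""
    parts = [f"<document>\n{document}\n</document>"]
    if quotes:  # point the model at the key passages first
        quoted = "\n".join(f"- {q}" for q in quotes)
        parts.append(f"Relevant quotes from the document:\n{quoted}")
    if examples:  # few-shot examples of correctly answered questions
        shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
        parts.append(f"Example questions answered correctly:\n{shots}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


prompt = build_prompt(
    document="(long report text goes here)",
    question="What year was the program audited?",
    examples=[("Who authored the report?", "A government agency")],
    quotes=["(a key sentence pulled from the report)"],
)
```

The bare-question baseline is just `build_prompt(document, question)` with the optional arguments omitted, which makes the comparison between prompting strategies easy to run.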

Lastly, the evaluation is available in the “Anthropic Cookbook”, so anyone curious about what they can get Claude to do with long documents can rerun the test or adapt it themselves.


The robot that NASA sent to Mars points to a solution for the AI ‘replacement myth’ and ‘ghost work’

NASA’s Mars rovers are a prime example of how AI should work: machines as sidekicks, not replacements. The public conversation has fixated on AI wiping out jobs or even turning against us, but it doesn’t have to go that way.

Scientists at NASA work with AI-assisted rovers on Mars. The rovers don’t replace people; they’re teammates. They have technical capabilities we lack, and we have judgment they can’t replicate. It’s like a buddy-cop movie, but on a Martian backdrop.

Many people fear AI will shove humans out of the picture, but the evidence suggests that isn’t the best route: robots paired with humans perform better. A self-driving car, for instance, still has to coordinate with human drivers. The problem is that this partnership is often treated as a pit stop on the road to full automation, leaving humans with tedious “ghost work”.

At NASA, when a rover finally failed, there were genuine tears. It’s not about believing robots are like humans; it’s about the bond people form with them.


Google Pixel 8: New Leaked Teaser Video Reveals Stunning AI Camera Features And More

Google is about to launch the Pixel 8, and a leaked teaser video offers a sneak peek at some of its AI features. The star of the show is “Magic Editor”, a heavily upgraded version of the existing Magic Eraser tool. It lets you rework photos in a flash, merging multiple snaps into one ideal shot and changing up the background like it’s no big deal.

Magic Editor can move objects around in a photo, remove clutter from the background, and even swap a plain sky for a dramatic sunset, which has people debating what still counts as a real photo versus fancy camera tricks.

The Pixel 8 looks set to shake up smartphone photography, and the tech world is waiting to see just how far these AI-powered camera features go.


New AI tool can accurately diagnose eye conditions, could help detect Parkinson’s

UK scientists have developed a new AI model, dubbed RETFound, that can spot eye, heart, and neurological disorders by scanning the retina. Created by researchers at Moorfields Eye Hospital and University College London, it outperforms existing AI systems, and in some cases human experts, at catching a variety of medical conditions. Notably, it was trained on data representing diverse populations, so it can pick up rare diseases that often slip through the cracks.

The model acts as a shortcut for specialists, slashing the workload of analyzing retinal images. RETFound not only detects vision problems more accurately but can also flag serious conditions such as stroke and Parkinson’s much earlier than current methods. It learns in a self-supervised way, by filling in masked regions of images, which makes it efficient and reduces its reliance on human-labeled data.

The scientists trained the model on 1.6 million images from Moorfields Eye Hospital, then adapted it for different detection and prediction tasks. It has proved a game changer in diagnosing eye diseases like diabetic retinopathy and glaucoma, and can even predict systemic disorders such as heart failure and Parkinson’s disease. Its edge over other models lies in the breadth of conditions it can diagnose and its consistency in detecting disease across diverse ethnic groups.


Law firms tackle hallucination hurdles to make AI a reality

When ChatGPT launched last November, lawyers buzzed about how AI could speed up tasks like contract drafting. Tech giants like Google and Microsoft followed with their own chatbots, and smaller start-ups such as Harvey and Robin AI are joining in, aimed squarely at legal professionals. The legal-software market is booming and is set to hit $12.4bn this year.

Kerry Westland of law firm Addleshaw Goddard says things are moving extremely fast. Her firm has evaluated AI products from more than 70 companies and picked eight for trials. Lawyers can use the tools to skim documents, pick out clauses, or turn legal jargon into plain language. But there’s a snag: the AI sometimes gives inconsistent answers or simply rambles.

The big issue? Hallucination: the AI can make things up, as it did when it conjured fake cases for a legal filing and got a firm fined. There is also the worry of client confidences leaking. UK law firm Travers Smith has said “no thanks” to ChatGPT over such concerns, and is experimenting with other tools it considers safer, though it’s a tricky road.


Neurons, Astrocytes, and Transformers: Are AI Models Biologically Plausible?

Scientists, mainly from MIT and Harvard, are looking at our brains and at AI, trying to find a connection. They focus on a high-powered AI architecture called the transformer, and they think it might work in a way that biological brain cells, neurons together with astrocytes, could implement.

In simpler terms, transformers are the brains behind AI systems like ChatGPT, and they are remarkably good at learning and responding almost like a human. The researchers want to understand whether biological cells could be wired up to carry out the same computation.
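The core transformer computation the researchers map onto neuron–astrocyte circuits is self-attention. Here is the generic textbook form in a minimal NumPy sketch (not the paper’s biological model), in which each token looks at every other token and mixes their values by learned similarity:

```python
# Scaled dot-product self-attention: project inputs to queries, keys, and
# values, score every token pair, softmax the scores, and take a weighted
# average of the values.
import numpy as np

def self_attention(X: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    """X: (tokens, dim). Returns one updated vector per token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

The study’s claim, roughly, is that this score-then-mix operation has a plausible counterpart in how astrocytes modulate communication between neurons.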

This study is just the beginning. The scientists hope it could open doors to new understandings of the brain, and even to AI tools that compute more like humans do. They are now keen to see how the theory holds up against real biological experiments, including whether astrocytes play a part in long-term memory.


Third-party AI tools pose increasing risks for organizations

MIT Sloan and Boston Consulting Group have released a report highlighting that as AI use grows, the risks are shooting up too. The main worry is third-party AI: tools or software created by one company and used by another. That matters because most organizations use third-party AI, and more than half of reported AI failures trace back to these tools. The use of undisclosed AI tools inside companies is called “shadow AI”.

The report suggests five remedies. First, companies should get serious about responsible AI programs: making sure AI does good and doesn’t harm individuals or society, while staying legally sound and ethical. Second, they should critically assess third-party tools, confirming that vendors also adhere to responsible AI practices and regulatory requirements.


‘They went to the bar at noon’: what this virtual AI village is teaching researchers

Stanford and Google researchers built a virtual world called Smallville, populated by 25 AI characters, or “generative agents”. Picture a miniature video game in which these AI residents wander around, make friends, and throw parties over a couple of simulated days. Each agent keeps a “memory stream”, a running diary of what it has done and observed. Agents can chat with one another and will sometimes sit and reflect on their experiences, and people can step in and assign them tasks.
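The “memory stream” idea can be sketched as a simple data structure: timestamped observations, retrieved by a score that favors recent and important entries. This is a simplified illustration; the actual paper also weighs semantic relevance to the current situation via embeddings, which is omitted here:

```python
# A toy memory stream: record observations over time, then recall the
# top-k entries by a combined recency + importance score.
from dataclasses import dataclass

@dataclass
class Memory:
    time: int          # simulation tick when the observation was recorded
    text: str
    importance: float  # 0..1, how notable the event was

class MemoryStream:
    def __init__(self):
        self.entries = []

    def record(self, time: int, text: str, importance: float) -> None:
        self.entries.append(Memory(time, text, importance))

    def retrieve(self, now: int, k: int = 2) -> list:
        def score(m: Memory) -> float:
            recency = 0.99 ** (now - m.time)   # exponential decay with age
            return recency + m.importance
        top = sorted(self.entries, key=score, reverse=True)[:k]
        return [m.text for m in top]

stream = MemoryStream()
stream.record(1, "Chatted with a neighbor at the cafe", 0.3)
stream.record(5, "Heard about a Valentine's Day party", 0.9)
stream.record(9, "Ate breakfast", 0.1)
recalled = stream.retrieve(now=10)
```

With these example entries, mundane recent events (breakfast) score below an older but important one (the party), which is the behavior that lets agents gossip about plans days after hearing them.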

Why do this? Computing pioneers long wanted to make machines act more like people, and recent large AI models gave the researchers a chance to try it in a virtual setting. Companies are eyeing the technology for video games, while scientists see a place to test their theories. The agents did some fun things, like gossiping about parties and elections; some bots even decided noon was happy hour. And while they generally played nice because of how they were built, it remains an open question whether future AI should be more “real” or more ideal.



Hollywood writers are 145 days into a strike, and while there has been some progress, there is still no deal. The big hang-up? How artificial intelligence may be used in the industry. Writers and producers, represented by the Writers Guild of America and the Alliance of Motion Picture and Television Producers respectively, are doing a lot of talking but haven’t reached an agreement.

Writers feel they are getting the short end of the stick on streaming revenue and want better pay from reruns. They are also worried about AI taking over parts of their jobs and want protections against that. Executives from Disney, Netflix, and Warner Bros. have joined the talks, fueling speculation that a deal might be close.


Why Silicon Valley’s biggest AI developers are hiring poets

Big tech in Silicon Valley is hiring poets. Companies like Scale AI and Appen are scooping up poets, novelists, and other writers. Why? To feed AI models fresh short stories and to judge how well the models handle their own writing.

But training an AI to produce good poetry isn’t easy. Models like ChatGPT can mimic human writing, but coming up with something genuinely new is a taller order: they mangle styles, rhythms, and sometimes whole poems. In one recent test, ChatGPT fumbled a Tamil poem.

Right now, AI is trained on massive datasets that are mostly in English. That is why companies are paying a premium for creative writers, especially in other languages. For example, a standard Japanese data worker might make around $14 an hour, but a Japanese poet could bank up to $50 an hour.

This trend isn’t just about getting AI to pen the next big novel. It is also a workaround for sticky copyright issues. Creators from manga artists to Pulitzer Prize winners have been giving AI developers the side-eye for using their work without permission. If AI companies simply commission their own content, they sidestep the whole mess.