How OpenAI is Pioneering Research to Curb the Risks of Superintelligent AI Going Rogue
OpenAI is forming a new team to bring ‘superintelligent’ AI under control
OpenAI, a leading name in the AI field, is buckling down to tame the wild horse of superintelligent AI. Ilya Sutskever, the company’s chief scientist and a co-founder, is assembling a new squad to figure out how to control these ultra-smart AI systems.
In a blog post, Sutskever and his colleague Jan Leike speculate that human-beating AI might pop up in the next ten years. Current methods, like using human feedback, assume we can keep a close eye on AI. But what happens when these systems get too smart for us to handle?
Enter OpenAI’s new Superalignment team. With 20% of the company’s computing power, a gang of brainy scientists and engineers, and a four-year deadline, their mission is to crack the code of controlling superintelligent AI.
Their game plan? Build an automated alignment researcher that’s roughly as capable as a human one. The idea is to train AI using human feedback, get AI to help evaluate other AI systems, and eventually have AI carry out alignment research itself – that is, making sure AI systems do what they’re supposed to and stay on the straight and narrow.
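The “AI evaluating AI” step can be pictured with a toy sketch: a simple automated judge scores candidate answers and picks a winner, standing in for a model grading another model’s outputs. Everything here is illustrative – OpenAI hasn’t published code for this pipeline, and the keyword-overlap “judge” below is a stand-in for a real learned evaluator.

```python
# Toy sketch of AI-assisted evaluation: a "judge" scores candidate answers
# against reference keywords, mimicking how a model might grade another
# model's outputs. Illustrative only; not OpenAI's actual method.

def judge_answer(candidate: str, reference_keywords: set[str]) -> float:
    """Score a candidate answer by keyword overlap with a reference (0.0 to 1.0)."""
    words = set(candidate.lower().split())
    if not reference_keywords:
        return 0.0
    return len(words & reference_keywords) / len(reference_keywords)


def pick_best(candidates: list[str], reference_keywords: set[str]) -> str:
    """Return the candidate the judge scores highest, the kind of preference
    signal that could feed back into training."""
    return max(candidates, key=lambda c: judge_answer(c, reference_keywords))
```

In a real system the judge would itself be a trained model, but the loop – generate candidates, score them automatically, keep the best – is the shape of the idea.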
Sure, there’s no perfect plan. The OpenAI crew recognizes there are drawbacks. If AI systems take over evaluation, they might amplify any existing flaws or biases. Plus, the toughest parts of alignment may not even be about engineering. But, in Sutskever and Leike’s eyes, it’s worth a shot. They view superintelligence alignment as a machine learning challenge. OpenAI plans to share their findings with the broader community, helping ensure the safety of AI models beyond their own backyard.
ChatGPT Users Abused Web Browsing Feature So OpenAI Has Turned It Off
OpenAI’s just yanked the “Browse with Bing” feature from its chatbot, ChatGPT, saying it sometimes displayed content in ways they didn’t want. Some Plus subscribers managed to ask it for full website text, even working around paywalls and privacy measures.
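The abuse boils down to a missing guard: a browsing feature that fetches a page and hands the full text back will happily leak paywalled content. Here’s a minimal sketch of the kind of check such a feature might need – the paywall markers and the robots flag are hypothetical, not anything OpenAI has described.

```python
# Illustrative guard for a page-fetching feature: refuse to return content
# when the site restricts crawling or the page signals a paywall.
# Markers and logic are hypothetical examples, not OpenAI's implementation.

PAYWALL_MARKERS = ("subscribe to continue", "metered-paywall", "article-locked")


def safe_to_return(html: str, robots_allows: bool) -> bool:
    """Return True only if crawling is permitted and no paywall marker appears."""
    if not robots_allows:
        return False
    lowered = html.lower()
    return not any(marker in lowered for marker in PAYWALL_MARKERS)
```

A real deployment would also need rate limits and per-site policies, which is presumably why OpenAI chose to pull the feature rather than patch it piecemeal.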
This decision isn’t sitting well with the folks who paid extra for ChatGPT Plus and its perks – the beefed-up GPT-4, the now-gone browsing feature, and the GPT-4 Plugin Store. They’re all, “Hey, we paid for this stuff!” No word yet on when browsing will be back, but OpenAI says it’s scrambling to sort things out.
Back in February, Microsoft – which has poured a cool $13 billion into OpenAI – built ChatGPT-powered chat into its Bing search engine. A couple of months later, OpenAI rolled out GPT-4 and gave its own chatbot browsing powers. Despite the removal of “Browse with Bing”, you can still use some GPT-4 plugins to have the bot read websites and PDFs and answer your burning questions about that content.
OpenAI hasn’t spilled the beans on when we can expect the browsing feature back, but they’re saying thanks to the Plus subscribers for helping test the feature and that they’re working hard to bring it back.
Leaked Email Shows Adobe Banning Employee Use Of Personal Email Accounts And Corporate Credit Cards For Generative AI Apps
Adobe’s shaking things up, sending a “be smart” memo around to staff about using AI apps at work. The gist? Keep personal email and company plastic out of the game when signing up for these apps.
Adobe’s top IT honcho, Cindy Stoddard, highlighted the importance of being upfront and careful with AI tech. Employees have to respect the data – no sharing sensitive info – and keep Adobe’s business safe.
The memo isn’t all “don’ts”, though. It’s about using AI in a clever way, like adding a bit of human magic to what the AI comes up with. Stoddard also announced a new team, AI@Adobe, to help folks make the most of AI tools.
She laid down some ground rules, differentiating between restricted, confidential, internal, and public data and how to use AI with each type. The bottom line? Use your noggin and don’t trust AI blindly.
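Those four data classes lend themselves to a simple policy gate. The sketch below maps each class to whether it may be pasted into an external generative AI tool – the class names come from the memo, but the allow/deny choices are an illustrative guess, not Adobe’s actual policy.

```python
# Hedged sketch of a data-classification gate for external AI tools.
# Class names follow Adobe's memo; the rules shown are illustrative only.

POLICY = {
    "public": True,        # already public, safe to share
    "internal": False,     # keep inside company-approved tools
    "confidential": False,
    "restricted": False,
}


def may_send_to_external_ai(classification: str) -> bool:
    """Return True only for data classes cleared for external AI tools.
    Unknown classes default to deny."""
    return POLICY.get(classification.lower(), False)
```

Defaulting unknown classes to deny mirrors the memo’s bottom line: when in doubt, don’t trust the AI tool with the data.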
Kinnu raises $6.5 million to use AI to flip the script when it comes to edtech
London’s hot new startup, Kinnu, has raked in another $6.5 million to flip the script on how we learn. They’ve got a slick AI system that puts learners front and center, rather than just feeding ’em what content makers want.
This new funding round was headed up by LocalGlobe and Cavalry Ventures, with added juice from Spark Capital, Jigsaw, and other private investors. It comes after a first round last year that pulled in $2.4 million, putting their total just a hair under $9 million.
Kinnu’s founders, Christopher Kahler and Abraham Müller, took a gamble in 2021 that their AI could shift the balance from “Content is King” to tech that helps folks actually hold on to what they learn. They use a special type of AI, what they jokingly call “next-level cephalopods,” to personalize the learning experience.
“Our aim is to empower folks to learn whatever they want to, whenever they want to,” says Kahler. And these two aren’t rookies: Kahler and Müller have already built and sold a business, and during their time off they got to thinking about their next venture, which led them to Kinnu.
CADDi raises $89M Series C to scale its B2B supply chain marketplace for manufacturing parts
B2B marketplace startup CADDi has scored a whopping $89M in a Series C funding round, taking its total war chest to $164M. They’re using these bucks to beef up the biz and help manufacturing companies, which have been thrown a major curveball by the pandemic.
CADDi is expanding fast and has more than doubled its workforce to nearly 600. It’s also rolled out a shiny new platform, the AI-powered CADDi Drawer, to handle design data.
Founded in 2017 by ex-McKinsey and Apple folks, CADDi’s bread and butter is two solutions: CADDi Manufacturing for buying parts and CADDi Drawer. Their platform helps streamline procurement, brings costs down by about a fifth, and boosts on-time delivery and quality rates.
The rub is, their model makes the most sense for build-to-order manufacturers – those that start making products only after receiving confirmed orders. CADDi says their platform can help these producers get the best bang for their buck.
AI gold rush makes basic data security hygiene critical
Bigwig companies are gaga over artificial intelligence (AI), especially generative AI like OpenAI’s ChatGPT. CEOs aren’t snoozing on this tech – half are already using it in products and services, and many rely on it for strategic and operational decisions. But here’s the rub – businesses need to up their game in basic data security.
According to an IBM study, CEOs are wary of AI risks, including data security and accuracy. Over half of them are holding off on big investments due to inconsistent security standards. Only 55% of CEOs are confident about fully reporting on data security and privacy to stakeholders.
Terry Ray, SVP for data security at Imperva, underlines the need for basic security hygiene. His team monitors generative AI developments and vets employees’ use of AI applications like ChatGPT to make sure they comply with company policies.
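One concrete form that vetting can take is scrubbing prompts before they leave the building. Here’s a toy pre-send redactor – the patterns and labels are illustrative examples of the idea, not Imperva’s actual tooling.

```python
import re

# Toy pre-send check for prompts bound for an external AI app: redact
# obviously sensitive patterns. Patterns are illustrative examples only.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(prompt: str) -> str:
    """Replace sensitive substrings with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Real data-loss-prevention tools go much further – classifiers, allowlists, audit logs – but even a regex pass catches the careless copy-paste that policies like this worry about.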
He notes that while generative AI is on the rise, it hasn’t changed how organizations are attacked. Old-school threats like unpatched systems are still the go-to for hackers. Ray also points out the need to better understand and secure APIs, which are often riddled with vulnerabilities and used extensively today.
Washington plans to block Chinese access to AI cloud services
Uncle Sam’s upping the ante, folks. Soon enough, the likes of Amazon Web Services and Microsoft Azure might need Washington’s okay before letting their customers in China use cloud services for AI training.
Word is the U.S. Department of Commerce has new rules cooking that would put cloud services under the same export controls as AI hardware – like the high-end GPUs that make training AI models a breeze.
The idea is to plug the gap where Chinese companies, blocked from buying high-end computing hardware, simply rent cloud access instead. The Chinese cloud market is pretty much in the hands of local big shots like Alibaba Cloud, Huawei Cloud, Tencent Cloud, and Baidu AI Cloud. AWS and Azure do have regions in China, but it’s local partners that run the show, not them.
AI expert Roy Illsley doesn’t think this’ll cause a big fuss. China’s spending big bucks on AI, so Chinese firms can still train their models. But, Illsley admits, the move might make it harder for them to copy U.S.-developed AI models.
Tech leaders at Collision Conference: ‘AI is our biggest opportunity’
At last week’s 2023 Collision Conference in Toronto, big tech honchos weighed in on the power and problems of artificial intelligence (AI). They all agreed – AI’s a game changer.
Google DeepMind exec Colin Murdoch says AI development is picking up the pace and could do some serious good – think curing diseases and tackling climate change. It’s a bit like the industrial revolution, helping us skip expensive and slow steps in scientific discovery.
IBM’s bigwig, Dr. Kareem Yusuf, is jazzed about AI boosting productivity – like improving human resources tasks by 40%. But he’s also got his eye on the ethical side of things. Knowing where your data comes from is key, and businesses can’t just make things up.
Over at Twitch, Tom Verrilli’s betting on AI to sort out online safety. His teams use AI to spot sketchy patterns and bots, helping to keep Twitch a safe space.
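“Spotting sketchy patterns” can start very simply: flag accounts that blast the same message over and over. The heuristic below is a made-up illustration of that first line of defense, not Twitch’s actual system, and the threshold is arbitrary.

```python
from collections import Counter

# Illustrative bot-spotting heuristic: flag an account that repeats the
# same chat message at high volume. Not Twitch's actual system; the
# threshold is a made-up example.


def looks_like_bot(messages: list[str], max_repeats: int = 3) -> bool:
    """Flag an account if any single message appears more than max_repeats times."""
    if not messages:
        return False
    counts = Counter(messages)
    return max(counts.values()) > max_repeats
```

Production safety systems layer learned models on top of heuristics like this, but simple repetition counts are often where spam and bot detection begins.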
Siam Capital’s founder, Sita Chantramonklasri, sees a lot of potential where AI meets sustainability. She believes AI could make a huge difference in fighting climate change by filling in gaps in our data.
And Navrina Singh, CEO at Credo AI, argues that while there are risks with AI, we shouldn’t let fear distract us. The lessons we learn now will help us handle any future problems AI might throw at us.
In short, these tech titans see AI as an exciting, transformative tool. But they know it’s not all sunshine and rainbows – there are ethical issues to consider and a lot of work ahead.