Get a first look at NVIDIA’s groundbreaking DGX GH200 AI Supercomputer, a technological marvel set to redefine computational capabilities and power the AI initiatives of tomorrow.


NVIDIA Brings New Generative AI Capabilities, Groundbreaking Performance to 100 Million Windows RTX PCs and Workstations

NVIDIA’s RTX PCs are getting smarter than a fox in a henhouse, thanks to the new generative AI capabilities. That’s a fancy way of saying these computers can create original content based on patterns they see in existing data. Imagine a machine learning how to draw a chicken by looking at a million pictures of chickens. Only it’s doing more than drawing chickens.
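That "learn the patterns in existing data, then create something new" idea can be sketched with a toy example. Here's a minimal, hypothetical Markov-chain text generator in Python: nothing like the transformer models running on RTX hardware, but the same basic recipe of learning patterns and generating fresh output from them:

```python
import random
from collections import defaultdict

def train(text):
    """Learn which word tends to follow which (the 'patterns' in the data)."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce a new sequence by sampling from the learned patterns."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the hen saw the fox and the fox saw the hen run"
model = train(corpus)
print(generate(model, "the"))
```

Every word the generator emits was seen in the training data, but the sequence itself can be brand new, which is the whole trick behind generative models, just at a vastly smaller scale.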

We’re talking about programs like NVIDIA NeMo and DLSS 3, and a whole lot more. When you let ’em run on NVIDIA’s RTX GPUs (that’s the computer’s muscle for graphics), they go like a bat out of hell – up to five times faster than the competition.

What makes this possible, you ask? Two things: Tensor Cores, which are like supercharged engines just for AI, and software improvements that come out regularly. Plus, RTX GPUs are going green, using less power when they can and only turning up the juice when they really need to.

Developers can now use a whole suite of RTX-accelerated tools on Windows 11 to create new AI applications. And with the help of big cloud service providers, they can make sure these applications run smoother than a gravy sandwich.

“Our RTX PCs are like a Swiss Army knife for AI,” says Pavan Davuluri from Microsoft. “We’re making it as easy as pie for developers to deploy AI apps that are faster than a greased pig.”

And boy, are developers cooking up a storm! Over 400 AI-accelerated apps and games have already been released. NVIDIA’s CEO, Jensen Huang, even unveiled a new AI to help game developers make non-playable characters smarter.

Folks can now experience this generative AI magic on the go, with RTX laptops and mobile workstations as small as 14 inches and as light as three pounds. Top-drawer companies like Dell, HP, Lenovo and ASUS are hopping on this bandwagon, building machines that are ready to ride the generative AI wave.

Soon, these machines will be able to balance performance and power, kind of like juggling while riding a unicycle, to make sure they’re running as efficiently as possible. Developers, it’s time to saddle up and get your applications ready for this wild AI ride!


NVIDIA ACE for Games Sparks Life Into Virtual Characters With Generative AI

The smart folks over at NVIDIA dropped a bombshell today, and it’s all about making video game characters smarter than a pack of coonhounds. Here’s the scoop in a nutshell.

NVIDIA announced this thing called the NVIDIA Avatar Cloud Engine (ACE) for Games. It's a new tool that makes game characters smarter through AI. We're talking non-playable characters (NPCs), the folks you can't play as. Essentially, it makes 'em as chatty as a barfly after a six-pack.

These game-making folks can use ACE to make characters talk, act, and even look a bit smarter. Picture this: Instead of a grumpy tavern owner just grunting at you, he’s now yakking away, full of stories and sass. All thanks to this thing called “generative AI.” Sounds fancy, huh?

Now, it's not just about flapping gums, mind you. NVIDIA built ACE on top of Omniverse, that fancy tech platform of theirs. They've got a few tools to play around with here. One's called NeMo, which handles language and dialogue. Another is Riva, which can recognize and generate speech. The last one is Omniverse Audio2Face, which matches a character's facial expressions to their gab.
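Put together, those three tools describe a pipeline: the player's speech comes in, a language model writes an in-character reply, the reply gets spoken aloud, and the character's face moves to match. Here's a rough Python sketch of that flow. Every function here is a hypothetical stand-in for the stage named in its docstring, not an actual NVIDIA API:

```python
# Hypothetical sketch of the ACE-style NPC pipeline described above.
# None of these function names are real NVIDIA APIs; they only mark the stages.

def recognize_speech(audio: bytes) -> str:
    """Stage 1 (Riva's role): turn the player's voice into text."""
    return "Got any good ramen stories?"  # stand-in for a real ASR call

def generate_reply(player_text: str, persona: str) -> str:
    """Stage 2 (NeMo's role): a language model writes an in-character reply."""
    return f"[{persona}] You won't believe what happened at the shop today..."

def synthesize_speech(reply: str) -> bytes:
    """Stage 3 (Riva again): turn the reply text back into audio."""
    return reply.encode("utf-8")  # stand-in for real TTS audio

def animate_face(audio: bytes) -> str:
    """Stage 4 (Audio2Face's role): drive facial animation from the audio."""
    return f"blendshape stream for {len(audio)} bytes of audio"

def npc_turn(player_audio: bytes, persona: str = "Jin") -> tuple[bytes, str]:
    """One full conversational turn: voice in, voice and animation out."""
    text = recognize_speech(player_audio)
    reply = generate_reply(text, persona)
    audio = synthesize_speech(reply)
    return audio, animate_face(audio)
```

The point of the sketch is the shape of the thing: each stage hands its output to the next, so swapping in a smarter language model upgrades the NPC without touching the speech or animation ends.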

Now, here’s where it gets interesting. NVIDIA teamed up with this startup called Convai to show off their new tech. They’ve got this demo, called Kairos, where players chat with this ramen shop owner named Jin. Now, Jin ain’t your usual NPC. He’ll gab your ear off, replying like a real person and fitting the game’s story. It’s like having a chat with a buddy over a bowl of noodles.

To wrap this all up, developers can use these AI models however they like – whether that’s on their own computer or up in the cloud. The point is, it’s all about making games more engaging – like diving into a page-turner instead of a dry textbook. Already, game developers are putting this tech to use, creating games that feel more like living, breathing worlds.

All in all, NVIDIA’s ACE is about as revolutionary as sliced bread in the gaming world. Get ready, y’all, because video games are about to get a lot chattier – and smarter, to boot.


MediaTek Partners With NVIDIA to Transform Automobiles With AI and Accelerated Computing

MediaTek and NVIDIA, two big shots in the tech game, are joining forces, as announced at a recent press conference. They're teaming up to transform cars into "always-connected" smart vehicles with the power of AI and accelerated computing. Picture this: your ordinary runabout turned into a high-tech command center. No need for a science degree to understand that!

In plain English, MediaTek’s gonna make some fancy chips for cars. These chips, known as systems-on-chips (SoCs), will be integrated with an NVIDIA GPU chiplet (a tiny, super-powerful piece of computing hardware). The result? Cars with next-level infotainment systems, safety functions, and connected services – from your basic jalopy to top-tier luxury sedans.

NVIDIA isn't just known for making your video games look better; it also has its claws in the robotics and auto industries. By bringing their GPU magic into the mix, they're planning to jazz up the car industry even more.

MediaTek’s also gonna use some software tech from NVIDIA to run these new auto SoCs. It’s kinda like putting the brain (software) into the body (hardware) of a robot, but in this case, the robot is your car.

All this hoopla basically means more in-vehicle entertainment options for automakers, and by extension, us, the consumers. It’s like the difference between the Model T Ford and a Tesla.

MediaTek has got a bit of a track record with high-speed connectivity and entertainment, which they’re gonna use to boost the capabilities of their own Auto platform. The market for these types of SoCs is projected to hit a whopping $12 billion in 2023.

To break it down, we’re looking at a future where you can chill in your car with a level of convenience, safety, and tech-awesomeness that’ll make the Jetsons green with envy. Who said you can’t teach an old car new tricks?


NVIDIA Announces DGX GH200 AI Supercomputer

NVIDIA's just put the pedal to the metal with their new DGX GH200 AI supercomputer. This big kahuna is built to power giant workloads like generative AI, recommender systems, and data processing.

Think of it as a huge digital brain built with 256 Grace Hopper Superchips. Together, these chips work as one mega-GPU, hitting 1 exaflop of AI performance with 144 terabytes of shared memory. That's enough room to hold every episode of every TV show ever made, and then some!
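That 144-terabyte figure checks out as back-of-envelope arithmetic, assuming each Grace Hopper Superchip contributes roughly 576 GB of combined CPU and GPU memory (a figure drawn from NVIDIA's public specs, so treat it as an approximation):

```python
superchips = 256
gb_per_superchip = 576  # ~480 GB LPDDR5X (CPU side) + ~96 GB HBM3 (GPU side)

total_gb = superchips * gb_per_superchip
total_tb = total_gb / 1024  # binary terabytes

print(total_gb, total_tb)  # 147456 144.0
```

256 chips times 576 GB apiece lands right on 144 TB, which is why the whole rack can behave like one enormous GPU with one enormous pool of memory.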

Jensen Huang, the head honcho at NVIDIA, is pretty chuffed about the whole thing. He says these supercomputers are the “digital engines of the modern economy” and they’re gonna “expand the frontier of AI.”

So, what's the big deal? Well, these superchips are like a muscle car engine. Instead of an old-school PCIe connection between the CPU and the GPU, the two sit in the same package, linked by NVIDIA's NVLink-C2C interconnect, making things way faster and more energy-efficient. It's kinda like trading your rusty old pickup for a slick sports car.

Big tech giants like Google Cloud, Meta (you know, the company formerly known as Facebook), and Microsoft are chomping at the bit to try out the DGX GH200. NVIDIA is also sharing the blueprint with other companies, so they can tweak it to fit their needs.

Now, training these AI models usually takes as long as a mule ride up a mountain, but this new supercomputer is expected to speed things up. As Girish Bablani from Microsoft put it, the DGX GH200 working with terabyte-sized datasets will allow developers to do advanced research faster and on a larger scale.

And in a move that screams, “We drink our own champagne,” NVIDIA’s building their own DGX GH200-based supercomputer named NVIDIA Helios. They’re planning to use it for their own research and it should be up and running by year’s end.

In short, the DGX GH200 supercomputer is a genuine hoot and holler moment in AI tech. And it’s expected to hit the streets by the end of the year. Now ain’t that a peach?


WPP Partners With NVIDIA to Build Generative AI-Enabled Content Engine for Digital Advertising

WPP and NVIDIA are cooking up something big, and it’s gonna change how ads get made. They’re whipping up a so-called “content engine” that uses some pretty fancy tech from NVIDIA, designed to make ad creation faster and easier. Think of it like an assembly line for ads.

So, how’s it work? This engine connects all sorts of tools for designing, creating, and managing content. The key players here include some big names like Adobe and Getty Images. This means WPP’s creative wizards can sprinkle a bit of their magic, mixing 3D design with what’s called “generative AI” to produce ads that are not only super personalized but also stay true to a company’s brand.

Now, let’s break down this generative AI mumbo-jumbo. It’s a kind of artificial intelligence that can whip up new content from scratch. Imagine telling a robot to draw a picture of a sunset, and it goes ahead and does it — that’s generative AI for ya.

The NVIDIA big cheese, Jensen Huang, gave us a sneak peek during a speech at COMPUTEX. His pitch? This tech can help businesses create a ton of high-quality ads, like pictures or videos, as well as cool 3D experiences that’ll knock your socks off.

And the CEO of WPP, Mark Read, ain’t shy about his ambitions either. According to him, this tech is gonna turn the world of marketing on its head and give WPP a leg up on the competition.

In a nutshell, it’s a souped-up, automated ad-making machine. This tech will make creating ads quicker than a New York minute and more efficient than a Swiss watch. Sounds like a game-changer, don’t it?

So, if you’re in the market for some snazzy new ads and you’re a WPP client, hold onto your hats, folks. This tech will be hitting the scene faster than a jackrabbit on a hot date.


World’s Leading Electronics Manufacturers Adopt NVIDIA Generative AI and Omniverse to Digitalize State-of-the-Art Factories

NVIDIA, the big dog in computer graphics, has become a hot ticket item for the world's top electronics producers. We're talking big names like Foxconn Industrial Internet, Innodisk, Pegatron, Quanta, Wistron, and more. What's the deal? They're all harnessing NVIDIA's advanced tech to amp up their factories, basically turning them into futuristic playgrounds for robots.

In plain English, NVIDIA is pitching in with some serious tech goodies. We’ve got Omniverse, which is a big digital sandbox that lets the suits play around with designs, artificial intelligence (AI), and so forth. Then there’s Isaac Sim, a fancy toy that lets folks tinker with robots before they’re even built. Metropolis is another one, this time helping with automated inspections.

Why should you care? Well, as the CEO of NVIDIA, Jensen Huang, puts it, building stuff digitally before making it in the real world can save a boatload of money. And let’s face it, who doesn’t like a fat wallet?

Now, each of these major electronics players is using NVIDIA’s tech in its own special way. For example, Foxconn is aiming to automate big chunks of its quality checks, while Pegatron is digitizing its whole factory setup to boost workflows and cut costs. Wistron, on the other hand, is creating digital twins of its operations and assembly lines, which is basically like creating a mirror image in the digital world – sounds like sci-fi, but it’s real!

In the end, it all comes down to this – NVIDIA’s technology is the new secret sauce for these electronics giants, helping them streamline their processes, trim the fat, and get ahead in this cutthroat world. It’s a wild new era, folks. Buckle up!


Microsoft executive calls for faster AI regulation

Microsoft bigwig Brad Smith has a bone to pick. He got all fired up on CBS' "Face the Nation" on Sunday about how the U.S. government needs to step on the gas to regulate AI. He claims it's the cat's pajamas, with more potential for our good than anything before. And he's not just talking calculators and Roombas here. We're talking disease diagnosis, disaster management, and drug discovery.

Smith wants to clear the air, though. AI isn’t some hocus pocus, it’s everyday stuff. Ever seen your Roomba dodge a chair? Bingo, that’s AI.

Now, he’s hip to the concerns about AI’s growing power. But he likens it to any newfangled tech that got folks in a tizzy back in the day. His solution? Put some brakes on this runaway train, but don’t stop it entirely.

While our jobs might get tossed around like a hot potato, Smith assures it’s gonna be a slow burn, not an overnight catastrophe. We’ve got time to roll with the punches and pick up some new tricks, he says.

Concerned about that scary fake explosion pic near the Pentagon? Smith’s got a plan – a watermark system. That’s just fancy talk for a virtual “fingerprint” on images to catch any funny business. Gotta find a happy middle ground between stopping lies and protecting free speech, right?
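The simplest version of an image "fingerprint" is a cryptographic hash: any edit to the image, however tiny, changes the hash. Real provenance schemes attach signed metadata that travels with the file and are far richer than this, so take the following only as a toy illustration of the core idea:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """A crude 'fingerprint': any change to the image changes the hash."""
    return hashlib.sha256(image_bytes).hexdigest()

original = b"\x89PNG...pretend image data..."
doctored = original + b"\x00"  # even a one-byte edit

# The doctored copy no longer matches the publisher's fingerprint.
assert fingerprint(original) != fingerprint(doctored)
```

If a publisher posts the fingerprint alongside the image, anyone can re-hash the copy they received and spot tampering; the hard part, which the fancier systems tackle, is proving who published the fingerprint in the first place.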

Smith’s rallying cry for the tech sector: “Kumbaya with governments around the globe.” He’s pushing for a whole new government department to keep an eye on AI, making sure it’s safe and secure from hackers and other baddies.

As for the proposed six-month pause on super-powered AI by folks like Elon Musk and Steve Wozniak? Smith thinks that's a load of hooey. He'd rather see us put the pedal to the metal than slam the brakes on progress. He even suggests an executive order to ensure the government only buys AI services that play by the safety rules.

So, his final word? “The world is moving forward,” and Uncle Sam better be keeping up.


How the rise of generative AI could kill the metaverse — or save it

Let’s pull back the curtain on the real tech drama: the metaverse vs generative AI. It’s like the chicken and the egg, only with fewer feathers and more zeroes and ones.

Once upon a time, the metaverse was the belle of the ball, with tech moguls like Mark Zuckerberg swooning over its potential. But it seems that even Zuckerberg has had to rein in his enthusiasm, leaving many of us wondering if the metaverse was just a fancy VR pipe dream. Heck, Meta’s Reality Labs, the crew behind VR and the metaverse, chalked up a whopping $4.279 billion operating loss last quarter alone. It’s enough to make you want to unplug and live in the real world, right?

Now, the buzzword on everyone’s lips is generative AI (or GenAI, if you’re into the whole brevity thing). It’s the cool kid in town, and folks are hopping on the GenAI train faster than you can say ‘artificial intelligence’.

But here’s the twist in the plot, folks. While some are busy writing the metaverse’s eulogy, others see this nifty GenAI as a shot in the arm for the metaverse. With GenAI’s help, we could whip up new virtual objects, design custom avatars, and beef up cybersecurity – all without breaking a sweat.

But hold onto your hats, because it’s not all sunshine and rainbows. GenAI, while handy, could be a double-edged sword. The same AI that can bolster cybersecurity can also be manipulated by no-goodniks to create more sophisticated cyber-attacks. So, there’s the rub.

Now, does all this hullabaloo spell the end of the metaverse? Not quite. Some of the big guns, like Nike, J.P. Morgan, and Gucci, still see a goldmine in the metaverse, and they’re placing their bets accordingly. Companies are using the metaverse for everything from training to marketing and hosting events.

So, what’s the final word? The rise of generative AI isn’t the death knell for the metaverse. Rather, it’s like a spicy plot twist. When we combine the metaverse with GenAI, we might just be on the brink of a new tech revolution, one where the virtual and real worlds seamlessly blend, increasing efficiency and cutting costs.

And who knows? If we play our cards right, we could create a future that’s not only technologically advanced but also more socially interactive. After all, isn’t that what the metaverse is supposed to be about?


16 Jobs That Will Disappear in the Future Due to AI

So, you’re comfy in your job, huh? Think again. Our AI overlords are licking their digital chops, eyeing 16 roles they’re set to grab by the scruff and chuck out of the office window.

Seems we’ve got a bit of a terminator on our hands, with an AI called ‘Charlie’ handling 11,400 calls a day at a home repair service company. Terminator? More like talkinator, amiright?

Anyway, Goldman Sachs suggests automation might impact around 300 million full-time jobs. Guess the bots are ready to play office bingo too. But wait, is this all just hyped-up sci-fi scaremongering? Historically, machines have nudged us out of jobs, sure, but we’ve evolved and moved onto other things. Just look at the agriculture sector. In 1900, 41% of the US workforce was down on the farm. By 2000, it had dropped to 2%, thanks to machines. And, guess what? We didn’t starve, but thrived in new roles born from tech advancements.

Still not convinced? Well, ATMs popped up in the 1970s, and between 1995 and 2010 their numbers shot up from around 100,000 to 400,000. And human bank tellers? They increased from 500,000 to about 550,000 between 1980 and 2010. Why? Because banks realized tellers could do more than just handle cash.

Now, what jobs are under the AI guillotine? First up, entry-level programming, data analysis, and web development roles. Seems our new digital colleagues can whip up a website faster than you can say “JavaScript.”

Entry-level writing and proofreading roles are also on the hit list. Apparently, AI’s got a knack for basic writing and nitpicking grammar mistakes. Translation jobs might hit the skids too, as AI gets a better handle on languages.

Next, graphic design and fast food order taking jobs are under threat. Fast food joints are loving AI for order-taking, and apparently, AI could be making a pit stop at drive-thrus soon.

Basic accounting and bookkeeping, postal service clerical roles, and data entry jobs are also in the firing line. And despite having survived the ATM invasion, bank teller roles could face the music, followed by administrative support jobs and certain legal roles.

Bottom line: AI’s here to stay, folks. Either we learn to tango with them, or we might just end up in the robot apocalypse unemployment line.


The AI Boom Runs on Chips, but It Can’t Get Enough

AI's new hotness, which Google's CEO has likened to mankind's discovery of fire, finds itself cooling its heels for lack of enough spark plugs: the graphics chips that keep its engine roaring. Nvidia, the "Daddy Warbucks" of graphics chips, has been hard-pressed to keep up with the wild demand triggered by the roaring success of the chatbot ChatGPT.

The chips are as scarce as hen’s teeth, prompting a rat race among tech players to secure this computational juice. It’s a jamboree that echoes the toilet paper pandemonium during the pandemic. This bottleneck has hamstrung cloud-service providers like Amazon and Microsoft from offering their AI developers enough server capacity to whip up increasingly complex AI models.

Even tech titans aren't immune to this challenge. OpenAI CEO Sam Altman wishes people would use ChatGPT a little less, given the processor predicament. Meanwhile, Elon Musk quipped that these chips are harder to get than drugs.

However, Musk played his trump card, snapping up a hefty chunk of Oracle’s server space, leaving many startups high and dry. His secret sauce? Building his own OpenAI rival, X.AI.

Without access to a slew of advanced chips, large AI models plod along at the pace of a three-legged tortoise. Nvidia's chips are built for massively parallel number-crunching, and parallelism is the name of the game in AI.

Scarcity has sparked innovation. Startups are on a treasure hunt for spare computing power, orchestrating bulk orders, making AI models more efficient, and even resorting to less popular cloud providers.
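The article doesn't spell out how startups are squeezing more out of their models, but one common efficiency trick is quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting memory roughly fourfold at a small cost in precision. A minimal sketch of the idea, in plain Python:

```python
def quantize_int8(weights):
    """Map float weights onto int8 values, shrinking storage ~4x vs float32."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered value lands within one quantization step of the original,
# at a quarter of the storage.
```

Real frameworks do this per-layer with calibration data and fancier schemes, but the trade is the same: fewer bits per weight means the same model fits on fewer of those scarce chips.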

Nvidia’s AI chips, each costing a pretty penny, around $33,000, are flying off the shelves, and are expected to be in short supply until next year at the earliest. This has prompted some to hoard cloud capacity like doomsday preppers.

Securing these chips doesn’t guarantee immediate usage. Akin to waiting for a bus in the middle of nowhere, even after paying up, one could be cooling their heels for weeks.

This chips crunch has led to a blossoming secondary market, partly fueled by large crypto companies that stocked up during their boom but are now selling off due to a downturn in their market.

In the face of all this chaos, companies are finding ways to bob and weave around these limitations. But for now, it seems like the AI world might have to slow its roll until the chips can once again fall where they may.