TechRadar is on the ground in Taipei, Taiwan, for the biggest computing event of the year, Computex 2023, and we’ll be in the audience for the opening keynote address by Nvidia CEO Jensen Huang, which is scheduled for May 29, 2023, at 11:00 AM Taipei time (4AM BST), or May 28, 2023, at 11PM EDT.

The keynote comes on the heels of the massive Q1 profit Nvidia reported, powered by the AI hardware behind the latest generative AI models like ChatGPT and Stable Diffusion at Microsoft, Google, OpenAI, and others.

We don’t have anything official about what Huang will say during the address to the throngs gathered in Taipei for the first real in-person Computex event since 2019, but you can be sure that there will be a lot of AI talk, as well as Nvidia Omniverse discussion.

Hot off the Nvidia RTX 4060 Ti 8GB launch, though, we wouldn’t be surprised at all to find that Nvidia announces a release date and pricing for the Nvidia RTX 4060 Ti 16GB and Nvidia RTX 4060, both of which Nvidia has said will launch in July.

Could we get other reveals at the event? Almost certainly, but we’ll have to see what transpires as Nvidia opens up Computex 2023.

Howdy folks, this is John Loeffler, TechRadar’s Components Editor, and I am in Taipei after a grueling 15-hour flight from New York City, but I’m pumped to be here and bring you all the latest from Nvidia’s keynote, as well as all the rest of Computex 2023.

I can’t say for certain what we’ll see in a couple of hours, but I expect that it will be exciting, at least as far as AI is concerned, considering how much Nvidia’s GPU hardware is integral to these latest AI advances.

Will we get graphics card talk? I’m almost certain it will come up, but will we get prices and release dates for the RTX 4060 Ti 16GB and RTX 4060? I sure hope so, but we’ll know a couple of hours from now once the event kicks off.

An Nvidia banner at Computex 2023

(Image credit: Future / John Loeffler)

I’m waiting to get into the Nvidia keynote event as we speak, which should be kicking off in the next 20 minutes. I’ll keep you posted on all the latest as everyone gets in and settled.

The stage at Nvidia's Computex 2023 keynote

(Image credit: Future / Mark Anthony Ramirez)

We’re all filing into our seats and waiting for the keynote to start, and the energy is great. It’s good to be back at Computex! We missed you, Taipei!

It looks like the event is about to start!

Oh yeah, this event is going to be very AI heavy from this intro.

Jensen Huang on stage at Computex 2023

(Image credit: Future / John Loeffler)

And here’s Jensen!

Jensen is showing off the RTX 4060 Ti GPU and a 14-inch laptop running Cyberpunk 2077 with real-time ray tracing.

An example of Nvidia Ace in a game

(Image credit: Future / John Loeffler)

Nvidia ACE, a real-time generative AI tool for game characters. This looks like something for procedurally generated missions and content.

Jensen says that the characters in this game aren’t prescripted; the demo features a typical quest-giver NPC. The conversation was a bit stilted, but not too bad. Maybe Oblivion-level dialogue. Not bad for an AI.

Jensen is talking about the IBM System/360 from 1964, specifically the importance of the CPU.

Jensen is talking about the end of Moore’s Law, and how generative AI is an answer to the end of Moore’s Law. 

Jensen says that it’s taken Nvidia years to develop its full hardware stack. He says Nvidia is introducing a new computing model built on Nvidia’s accelerated computing paradigm.

After the brief talk about ray tracing and whatnot, we’re getting to the meat of Nvidia’s new business model: data center GPU setups for large language models. This was the major driver of Nvidia’s profits last quarter.

Jensen is showing off a data center GPU server that can run a large language model on less than 0.4 GWh for $400K.

Nvidia is going to transition into a data center AI company as a primary function of its business, and it looks increasingly like graphics cards are going to be more of an afterthought.

It makes sense. Given how much money Nvidia is making off the AI side of its business, the market incentives to go all-in on AI will be irresistible.

Every single data center is overextended, Jensen says. I don’t doubt that one bit.

The Nvidia HGX H100 is in full production, Jensen says. To be clear, this is a data center GPU, so don’t expect it to run any of the best PC games on your rig unless you’re playing via the cloud.

LOL, the H100 costs $200,000. Talk about GPU price inflation.

Nvidia has been investing heavily in its AI infrastructure, and it’s making major advances on AI supercomputers every two years. These supercomputers will need dedicated programmers, and are comparable to computer “factories”. To be honest, this keynote is going over the heads of probably 60% of the audience at the very least.

Jensen says that Nvidia improved graphics processing 1000x in 5 years using AI. I am really not sure where that number is coming from.

ChatGPT has entered the chat.

“Anything that has structure, we can learn that language,” Jensen says. “Once you can learn the language of certain information…we can now guide the AI to produce new information.”

“We can now apply computer science…to so many different fields that wasn’t possible before,” he says.

Jensen is showing off the power of prompts to generate new content, including a text-to-video demo with a very realistic woman speaking the words. He showed off a generative AI demo of a traditional Taiwanese song, and then gave the AI a prompt to create a sing-along song. He just made the audience sing along.

1,600 generative AI startups are partnering with Nvidia. Yeah…the RTX 5000-series graphics cards will probably be Nvidia’s last. Maybe the 6000-series, but shareholders will force Nvidia to go all-in on generative AI and AI data centers.

I agree with Jensen, we’re in a new computing era for sure.

A chart showing the AI hockey stick development

(Image credit: Future / John Loeffler)

Jensen busted out the S curve and hockey stick that anyone who has looked at AI should be very familiar with.

Grace Hopper is in full production. The GH200 can handle 65-billion-parameter models, which is absolutely obscene. It has interlinked shared memory, so you don’t have to break data into pieces, which will help scale out AI.
