Key Takeaways from NVIDIA’s GTC Conference Keynote
I recently attended NVIDIA’s GTC conference, billed as the “number one AI conference for innovators, technologists, and creatives,” and the keynote by NVIDIA’s always-dynamic CEO, Jensen Huang, did not disappoint.
Over the course of his lively talk, Huang detailed how NVIDIA’s DGX line, which RCH has been selling and supporting since shortly after its inception, continues to mature into a full-blown AI enabler.
How? Scale, essentially.
More specifically, though, NVIDIA’s growing lineup of software and models will facilitate innovation by removing much of the software-infrastructure work and providing frameworks and baselines on which to build.
In other words, one will not be stuck reinventing the wheel when implementing AI (a powerful and somewhat ironic analogy when you consider the impact of both technologies—the wheel and artificial intelligence—on human civilization).
The result, just as RCH promotes in Scientific Compute, is that the workstation, server, and cluster look the same to users, so scaling is essentially seamless.
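To make that “same code from workstation to cluster” idea concrete, here is a minimal sketch (my own illustration, not anything NVIDIA showed) of the pattern in PyTorch: the model and training step are written once, and the script simply uses however many GPUs the node exposes, whether that’s a single workstation card or all of the GPUs in a DGX node; multi-node scaling follows the same shape with DistributedDataParallel.

```python
# Illustrative sketch only: the same script runs on a laptop, a workstation,
# or a DGX node; only the hardware it discovers at runtime changes.
import torch
import torch.nn as nn

def build_model() -> nn.Module:
    # Stand-in model; in practice this could start from a pretrained baseline.
    return nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 2))

def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model().to(device)

    # Use every GPU the node exposes: one on a workstation, several on a DGX.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # multi-node would swap in DistributedDataParallel

    x = torch.randn(64, 1024, device=device)        # placeholder batch
    y = torch.randint(0, 2, (64,), device=device)   # placeholder labels

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"ran one step on {device} across {max(torch.cuda.device_count(), 1)} device(s)")

if __name__ == "__main__":
    main()
```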
While cynics could see what they’re doing as a form of vendor lock-in, I’m looking at it as prosperity via an ecosystem. Much like the way I, and millions of other people around the world, are locked into Apple because we enjoy the “Apple ecosystem,” NVIDIA’s vision will enable the company to transcend its role as simply an emerging-technology provider (which, to be clear, is no small feat in and of itself) to become a facilitator of a complete AI ecosystem. In such a scenario, as with Apple, the components connect and work together seamlessly to create a next-level, friction-free experience for the user.
From my perspective, the potential benefit of that outcome—particularly within drug research/early development where the barriers to optimizing AI are high—is enormous.
The Value of an AI Ecosystem in Drug Discovery
The Cliff’s Notes version of how NVIDIA plans to operationalize its vision (and my take on it) is this:
- Application Sharing: NVIDIA touted Omniverse as a collaborative platform for “universal” sharing of applications and 3D content.
- Data Centralization: The software-defined data center (BlueField-2 and -3 DPUs) was also quite compelling, though in the R&D world we live in at RCH, it’s really more about Science and Analytics than Infrastructure. Nonetheless, I think we have to acknowledge the potential here.
- Virtualization: GPU virtualization was also impressive (though, like BlueField, this is not new so much as evolved). I still wrestle with virtualization-for-density when it comes to Scientific Compute, but we (collectively) need to put more thought into this.
- Processing: NVIDIA is pushing its own ARM-based CPU as the final component in the mix. ARM is clearly going to be a force moving forward, and Intel x86_64 is aging, but we also have to acknowledge that this will be an evolution, not a flash cut.
What’s interesting is how this approach could play out to enhance in-silico Science.
Our world is Cloud-first. Candidly, I’m a proponent of that for what I see as legitimate reasons (you can read more about that here). But like any business, Public Cloud vendors need to cater to a wide audience to better the chances of commercial success. While this philosophy leads to many beneficial services, it can also be a blocker for specialized/niche needs, like those in drug R&D.
To this end, Edge Computing (for those still catching up: a high-bandwidth, very low-latency specialty compute strategy in which co-location centers sit topologically close to the Cloud) is one solution.
Edge Computing is a powerful paradigm within Cloud Computing, enabling niche features and cost controls while maintaining a Cloud-first tack. Teams can take advantage of the Public Cloud for data storage while augmenting what Public Cloud providers offer by keeping compute on the Edge. It’s a model that lets data move faster than in the more traditional scenario; in NVIDIA’s equation, DGX and possibly BlueField serve as the Edge of the Cloud.
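As a rough illustration of that split (Public Cloud for storage, Edge for compute), here is a hedged sketch in Python; the bucket name, object key, and analysis step are hypothetical placeholders, but the pattern of pulling a dataset from cloud object storage and then crunching it on GPU hardware sitting at the Edge is the one described above.

```python
# Hypothetical sketch of the "Cloud for storage, Edge for compute" pattern.
# The bucket, key, and analysis step are placeholders, not a real pipeline.
import boto3
import torch

def fetch_from_cloud(bucket: str, key: str, local_path: str) -> str:
    """Pull a dataset object from Public Cloud storage (S3 here) down to the Edge node."""
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, local_path)
    return local_path

def analyze_on_edge(local_path: str) -> torch.Tensor:
    """Run the compute-heavy step on local (Edge) GPU hardware, e.g. a DGX."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    data = torch.load(local_path, map_location=device)  # placeholder workload
    return data.float().mean(dim=0)                     # GPU-side reduction as a stand-in

if __name__ == "__main__":
    path = fetch_from_cloud("my-research-bucket", "assay/plate_042.pt", "/scratch/plate_042.pt")
    summary = analyze_on_edge(path)
    print(summary.shape)
```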
More interesting, though, is how this strategy could help Life Sciences companies dip their toes into the still-unexplored waters of Quantum Computing through cuQuantum, NVIDIA’s SDK for quantum (qubit) simulation on GPUs, for early research and discovery.
I can’t yet say how well this works in application, but the idea that we could use a simulator to test Quantum Compute code, as well as train people in the discipline, has the potential to be downright disruptive. Talking to those in the Quantum Compute industry, there are anywhere from 10 to 35 people in the world who can code in this manner today. I see this simulator as a more cost-effective way to explore the technology, one that could even grow into a development platform for more user-friendly, OS-type services for Quantum.
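I haven’t benchmarked cuQuantum myself, but to give a feel for what “simulating qubits on a GPU” looks like in practice, here is a minimal sketch using Qiskit with the Aer statevector simulator, whose GPU builds can be backed by NVIDIA’s cuQuantum libraries; the two-qubit Bell-state circuit and the GPU device selection are illustrative assumptions on my part, not an NVIDIA-endorsed workflow.

```python
# Illustrative only: a tiny Bell-state circuit run on a GPU statevector simulator.
# Requires qiskit plus a GPU-enabled build of qiskit-aer (which can use cuQuantum).
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Build a 2-qubit Bell state: H on qubit 0, then CNOT from qubit 0 to qubit 1.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Statevector simulation on the GPU; drop device="GPU" to run the same circuit on CPU.
sim = AerSimulator(method="statevector", device="GPU")

result = sim.run(transpile(qc, sim), shots=1024).result()
print(result.get_counts())  # expect roughly a 50/50 split between '00' and '11'
```

The same circuit code runs unchanged on a CPU backend, which is part of the appeal for training people before real quantum hardware is in reach.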
A Solution for Reducing the Pain of Data Movement
In summary, what NVIDIA is proposing may simplify the path to a more synergistic computing paradigm by enabling teams to remain—or become—Cloud-first without sacrificing speed or performance.
Further, while the Public Cloud is fantastic, nothing is perfect. The Edge, enabled by innovations like the ones NVIDIA is introducing, could offer the upside of On-prem for niche workloads while reducing the sometimes-maligned task of data movement.
While only time will tell for sure how well NVIDIA’s tools will solve Scientific Computing challenges such as these, I have a feeling that Jensen and his team—like our most ancient of ancestors who first carved stone into a circle—just may be on to something here.
RCH Solutions is a global provider of computational science expertise, helping Life Sciences and Healthcare firms of all sizes clear the path to discovery for nearly 30 years. If you’re interested in learning how RCH can support your goals, get in touch with us here.