WWC24 - Ankit Patel - Unlocking the Future: Breakthrough Application Performance and Capabilities with NVIDIA
A vision model sees a bus. An LLM interprets it. Another model speaks an alert. Learn how to compose specialized AI into powerful, real-time applications.
#1 (about 3 minutes)
Understanding accelerated computing and GPU parallelism
Accelerated computing offloads parallelizable tasks from the CPU to specialized GPU cores, executing them simultaneously for a massive speedup.
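The data-parallel pattern behind this speedup can be sketched in plain Python. This is a CPU analogy, not GPU code: each worker applies the same element-wise operation (a*x + y, the classic SAXPY kernel) to its own slice of the data, which is exactly the structure a GPU exploits across thousands of cores. Python's GIL limits real thread speedup here; the point is the split-and-map shape of the work.

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy_chunk(args):
    # Each worker computes a*x + y for its slice -- the same element-wise
    # operation a GPU would execute simultaneously across thousands of cores.
    a, xs, ys = args
    return [a * x + y for x, y in zip(xs, ys)]

def saxpy_parallel(a, x, y, workers=4):
    # Split the arrays into chunks and map the same kernel over each chunk.
    step = (len(x) + workers - 1) // workers
    chunks = [(a, x[i:i + step], y[i:i + step]) for i in range(0, len(x), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = []
        for part in pool.map(saxpy_chunk, chunks):
            out.extend(part)
    return out

print(saxpy_parallel(2, [1, 2, 3], [10, 20, 30]))  # [12, 24, 36]
```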
#2 (about 2 minutes)
Calculating the cost and power savings of GPUs
While a GPU-accelerated system costs more upfront, it can replace hundreds of CPU systems for parallel workloads, leading to significant cost and power savings.
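A back-of-the-envelope version of this comparison, using entirely hypothetical figures (the talk's argument is about the ratio, not these specific numbers):

```python
# HYPOTHETICAL figures for illustration only.
cpu_servers_needed = 100      # CPU-only servers to match the workload (assumed)
cpu_server_cost = 10_000      # USD per CPU server (assumed)
cpu_server_power = 500        # watts per CPU server (assumed)

gpu_servers_needed = 1        # one GPU-accelerated server replaces them (assumed)
gpu_server_cost = 100_000     # USD: higher upfront price (assumed)
gpu_server_power = 3_000      # watts (assumed)

cost_savings = (cpu_servers_needed * cpu_server_cost
                - gpu_servers_needed * gpu_server_cost)
power_savings = (cpu_servers_needed * cpu_server_power
                 - gpu_servers_needed * gpu_server_power)

print(f"cost savings:  ${cost_savings:,}")    # $900,000
print(f"power savings: {power_savings:,} W")  # 47,000 W
```

Even with a 10x higher per-server price, the GPU system comes out far ahead once it replaces enough CPU machines.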
#3 (about 4 minutes)
Using NVIDIA libraries to easily accelerate applications
NVIDIA provides domain-specific libraries like cuDF that let developers accelerate existing code, such as pandas DataFrame operations, with minimal changes.
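The "minimal changes" claim is concrete with cuDF's pandas accelerator mode: ordinary pandas code like the snippet below runs unmodified, and on a system with a supported GPU and RAPIDS installed, launching it as `python -m cudf.pandas script.py` (or loading `%load_ext cudf.pandas` in Jupyter) routes the same operations through the GPU.

```python
import pandas as pd  # unchanged import; cudf.pandas intercepts it when enabled

# A routine groupby-aggregate -- the kind of dataframe work cuDF accelerates.
df = pd.DataFrame({
    "city": ["Berlin", "Berlin", "Vienna"],
    "temp": [21.0, 23.0, 19.0],
})
avg = df.groupby("city")["temp"].mean()
print(avg)
```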
#4 (about 3 minutes)
Shifting from traditional code to AI-powered logic
Modern AI development replaces complex, hard-coded logic with prompts to large language models, changing how developers implement functions like sentiment analysis.
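A minimal sketch of this shift: instead of hand-written rules, the sentiment "logic" lives in a prompt sent to any OpenAI-compatible chat endpoint. The model name below is illustrative, and no network call is made here; only the request payload is built.

```python
import json

def sentiment_request(text, model="meta/llama3-8b-instruct"):
    # The classification logic is expressed in the system prompt,
    # not in hard-coded keyword rules. Model name is an assumption.
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": ("Classify the sentiment of the user's text as "
                         "exactly one word: positive, negative, or neutral.")},
            {"role": "user", "content": text},
        ],
        "temperature": 0.0,  # deterministic output for a classification task
    }

payload = sentiment_request("The new release is fantastic!")
print(json.dumps(payload, indent=2))
```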
#5 (about 3 minutes)
Composing multiple AI models for complex tasks
Developers can now create sophisticated applications by chaining multiple AI models together, such as using a vision model's output to trigger an LLM that calls a tool.
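The bus example from the talk's description can be mocked end to end with stub functions: a vision model detects objects, an LLM decides whether to call a tool, and a speech model announces the alert. The stubs stand in for real models; only the chaining pattern is the point.

```python
def vision_model(frame):
    # Stand-in for an object-detection model returning detected labels.
    return ["bus", "person"] if "bus" in frame else ["person"]

def llm(objects):
    # Stand-in for an LLM that interprets detections and picks a tool call.
    if "bus" in objects:
        return {"tool": "speak_alert", "args": {"text": "Bus approaching"}}
    return {"tool": None, "args": {}}

def speak_alert(text):
    # Stand-in for a text-to-speech model.
    return f"[audio] {text}"

def pipeline(frame):
    # Chain the models: vision output feeds the LLM, which may call a tool.
    decision = llm(vision_model(frame))
    if decision["tool"] == "speak_alert":
        return speak_alert(**decision["args"])
    return "[no action]"

print(pipeline("street scene with bus"))  # [audio] Bus approaching
```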
#6 (about 2 minutes)
Deploying enterprise AI applications with NVIDIA NIM
NVIDIA NIM provides enterprise-grade microservices for deploying AI models with features like runtime optimization, stable APIs, and Kubernetes integration.
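One of those stable APIs is OpenAI-compatible HTTP, so application code talks to a NIM like any chat endpoint. The sketch below builds (but does not send) a request to a locally running NIM container; the host, port, and model name are assumptions for a typical local deployment.

```python
import json
import urllib.request

def build_nim_request(prompt, base_url="http://localhost:8000"):
    # NIM microservices expose an OpenAI-compatible chat API; the URL and
    # model name here are assumptions for a local container deployment.
    body = json.dumps({
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_nim_request("Hello")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Because the API shape matches OpenAI's, swapping a cloud model for a self-hosted NIM is largely a matter of changing the base URL.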
#7 (about 4 minutes)
Accessing NVIDIA's developer programs and training
NVIDIA offers a developer program with access to libraries, NIMs for local development, and free training courses through the Deep Learning Institute.