Maximizing Efficiency in Large Language Models: Compute, Memory, and Fine-Tuning
Uploaded by Twin Karmakharm, July 24, 2024
Speaker: Karin Sevegnani, Senior Solutions Architect, Nvidia
In this talk, we will explore the balance between computational resources, memory limitations, and parameter-efficient fine-tuning techniques in large language models (LLMs). We will analyze strategies for optimizing the performance of LLMs while managing these constraints effectively. From efficient memory utilization to streamlined parameter fine-tuning methods, we will discuss practical approaches to maximizing the efficiency of LLMs without sacrificing performance.
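To make the memory trade-off concrete, here is a back-of-the-envelope sketch of why parameter-efficient methods such as LoRA shrink the optimizer-state footprint of fine-tuning. The layer count, hidden size, rank, and the set of adapted matrices below are illustrative assumptions (loosely modeled on a 7B-parameter architecture), not figures from the talk.

```python
def lora_trainable_params(n_layers, d_model, rank, n_target_matrices=4):
    # Each adapted d_model x d_model weight gains two low-rank factors:
    # A (d_model x rank) and B (rank x d_model). Only these are trained.
    per_matrix = 2 * d_model * rank
    return n_layers * n_target_matrices * per_matrix

def adam_state_bytes(trainable_params, n_states=2, bytes_per_state=4):
    # Adam keeps two fp32 moment estimates per *trainable* parameter,
    # so optimizer memory scales with trainable, not total, parameters.
    return trainable_params * n_states * bytes_per_state

# Assumed model shape: 32 layers, hidden size 4096, LoRA rank 8.
full_params = 7_000_000_000
lora_params = lora_trainable_params(n_layers=32, d_model=4096, rank=8)

print(lora_params)                          # ~8.4M trainable parameters
print(adam_state_bytes(full_params) / 1e9)  # ~56 GB of Adam state (full fine-tune)
print(adam_state_bytes(lora_params) / 1e9)  # ~0.07 GB of Adam state (LoRA)
```

Under these assumptions, LoRA trains roughly 0.1% of the parameters, cutting Adam's fp32 moment buffers from tens of gigabytes to well under one, which is the core memory argument for parameter-efficient fine-tuning.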
Presented at the Best Practices in AI Afternoon event, 2024-07-05.