Apple has shared details of a collaboration with NVIDIA to speed up text generation with large language models (LLMs), implementing a new technique that offers substantial performance gains for AI applications.
Earlier this year, Apple published and open-sourced Recurrent Drafter (ReDrafter), an approach that combines beam search and dynamic tree attention to accelerate text generation. Beam search explores multiple candidate text sequences at once for better results, while tree attention organizes those candidates and removes redundant overlaps among them to improve efficiency.
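To see why this kind of drafting can speed up generation, consider the toy Python sketch below. It illustrates the general speculative decoding idea underlying ReDrafter: a cheap draft step proposes several tokens ahead, and the full model verifies them, so multiple tokens can be committed per expensive model call. The `draft_model` and `target_model_next` functions are hypothetical stand-ins for illustration only, not Apple's ReDrafter or NVIDIA's TensorRT-LLM code, and the real system additionally drafts multiple beams and shares their common prefixes via tree attention.

```python
# Toy sketch of speculative decoding: draft k tokens cheaply, then verify
# them against the expensive target model, keeping the longest agreeing
# prefix. Both "models" here are deterministic random stand-ins.

import random

VOCAB = list("abcde")

def draft_model(prefix, k=4):
    """Cheap drafter: guesses the next k tokens (toy heuristic)."""
    rng = random.Random(hash(prefix) % 1000)
    return [rng.choice(VOCAB) for _ in range(k)]

def target_model_next(prefix):
    """Expensive target model: returns the 'true' next token (toy rule)."""
    rng = random.Random(hash(prefix) % 997)
    return rng.choice(VOCAB)

def speculative_step(prefix, k=4):
    """Propose k draft tokens, then verify them against the target model.

    In a real system the k verifications happen in a single batched
    forward pass, which is where the speedup comes from: one expensive
    call can commit several tokens instead of just one.
    """
    draft = draft_model(prefix, k)
    accepted = []
    for tok in draft:
        expected = target_model_next(prefix + "".join(accepted))
        if tok == expected:
            accepted.append(tok)       # draft token verified, keep it
        else:
            accepted.append(expected)  # first mismatch: take the target's token and stop
            break
    return prefix + "".join(accepted), len(accepted)

prefix, steps = "a", 0
while len(prefix) < 20:
    prefix, _ = speculative_step(prefix)
    steps += 1

print(f"sequence length {len(prefix)} after {steps} verification steps")
```

Because every verification step commits at least one token, and often several when the drafter guesses well, the number of full-model passes drops well below the number of tokens generated, which is the source of ReDrafter's reported throughput gains.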
Apple has now integrated the technique into NVIDIA's TensorRT-LLM framework, which optimizes LLMs running on NVIDIA GPUs, where it achieved "state-of-the-art performance," according to Apple. In testing with a production model containing tens of billions of parameters, the integration delivered a 2.7x increase in tokens generated per second.
Apple says the improved performance not only reduces user-perceived latency but also cuts GPU usage and power consumption. From Apple's Machine Learning Research blog:
"LLMs are increasingly being used to power production applications, and improving inference efficiency can both impact computational costs and reduce latency for users. With ReDrafter's novel approach to speculative decoding integrated into the NVIDIA TensorRT-LLM framework, developers can now benefit from faster token generation on NVIDIA GPUs for their production LLM applications."
Developers interested in implementing ReDrafter can find detailed information on both Apple's website and NVIDIA's developer blog.