
Running local models on Macs gets faster with Ollama's MLX support

Apple Silicon Macs get a performance boost thanks to better unified memory usage.

Source: Ars Technica


Ollama, a runtime for running large language models on a local computer, has introduced support for Apple's open source MLX framework for machine learning. Additionally, Ollama says it has improved caching performance and now supports Nvidia's NVFP4 format for model compression, making for much more efficient memory usage in certain models.
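
For readers who want to try a local model on such a setup, here is a minimal sketch using the official ollama Python client against a locally running Ollama server. The model name is only an example, and the assumption (not something the article states) is that backend selection, including any MLX acceleration on Apple Silicon, is handled by the Ollama runtime rather than by anything in client code.

```python
# Minimal sketch: chatting with a locally running Ollama server from Python.
# Assumes the `ollama` package (pip install ollama) and an Ollama server
# already running with the model pulled (e.g. `ollama pull llama3.2`).
# The model name is illustrative; whether the MLX backend is used is decided
# by the Ollama runtime on Apple Silicon, not by this client code (assumption).

import ollama

def ask(prompt: str, model: str = "llama3.2") -> str:
    """Send a single prompt to a local model and return the reply text."""
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(ask("In one sentence, what is unified memory on Apple Silicon?"))
```

As a rough sense of why NVFP4 matters for memory, hedged because the article gives no figures: a 4-bit floating-point weight plus small per-block scale factors works out to roughly 4 to 5 bits per parameter, versus 16 bits for an FP16 weight, which is where the "much more efficient memory usage" comes from for models that ship an NVFP4 quantization.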

Combined, these developments promise significantly improved performance on Macs with Apple Silicon chips (M1 or later), and the timing couldn't be better: local models are gaining traction beyond researcher and hobbyist communities in ways they haven't before.

The recent runaway success of OpenClaw—which raced its way to over 300,000 stars on GitHub, made headlines with experiments like Moltbook, and became an obsession in China in particular—has many people experimenting with running models on their machines.
