Intel Xeon, Core™ Ultra and AI PC Accelerate GenAI Workloads

Balandis 29, 2024


Intel has validated its AI product portfolio for the first Meta Llama 3 8B and 70B models across Intel® Gaudi® accelerators, Intel® Xeon® processors, Intel® Core™ Ultra processors and Intel® Arc™ graphics.

As part of its mission to bring AI everywhere, Intel invests in the software and AI ecosystem to ensure that its products are ready for the latest innovations in the dynamic AI space. In the data center, Intel Gaudi and Intel Xeon processors with Intel® Advanced Matrix Extension (Intel® AMX) acceleration give customers options to meet dynamic and wide-ranging requirements.

Intel Core Ultra processors and Intel Arc graphics products provide both a local development vehicle and deployment across millions of devices, with support for comprehensive software frameworks and tools, including PyTorch and Intel® Extension for PyTorch® for local research and development, and the OpenVINO™ toolkit for model development and inference.

Intel’s initial testing and performance results for Llama 3 8B and 70B models use open source software, including PyTorch, DeepSpeed, Intel Optimum Habana library and Intel Extension for PyTorch to provide the latest software optimizations.

Intel Xeon processors address demanding end-to-end AI workloads, and Intel invests in optimizing LLM results to reduce latency. Intel® Xeon® 6 processors with Performance-cores (code-named Granite Rapids) show a 2x improvement on Llama 3 8B inference latency compared with 4th Gen Intel® Xeon® processors and the ability to run larger language models, like Llama 3 70B, under 100ms per generated token.
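The per-token latency bound above translates directly into generation throughput. The sketch below is a simple illustration, assuming the quoted 100 ms-per-token figure and a rough heuristic of about 0.75 English words per generated token (a common rule of thumb, not a number from Intel's announcement):

```python
# Convert per-token generation latency into throughput figures.

def tokens_per_second(ms_per_token: float) -> float:
    """Tokens generated per second at a given per-token latency."""
    return 1000.0 / ms_per_token

def words_per_minute(ms_per_token: float, words_per_token: float = 0.75) -> float:
    """Approximate generated words per minute.

    words_per_token ~0.75 is a rough English-text heuristic,
    not a figure from the press release.
    """
    return tokens_per_second(ms_per_token) * words_per_token * 60.0

if __name__ == "__main__":
    latency = 100.0  # ms per generated token, the Llama 3 70B bound quoted above
    print(tokens_per_second(latency))   # 10.0 tokens/s
    print(words_per_minute(latency))    # 450.0 words/min
```

So a model held under 100 ms per token sustains at least 10 tokens per second, on the order of 450 generated words per minute under the word-per-token assumption.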

Intel Core Ultra and Intel Arc graphics deliver impressive performance for Llama 3. In an initial round of testing, Intel Core Ultra processors already generate text faster than typical human reading speed. Further, the Intel® Arc™ A770 GPU has Xe Matrix eXtensions (XMX) AI acceleration and 16GB of dedicated memory to provide exceptional performance for LLM workloads.
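The "faster than typical human reading speed" comparison can be made concrete with two assumed numbers: an average silent reading speed of roughly 250 words per minute and about 0.75 words per generated token. Neither figure comes from Intel's announcement; they are ballpark literature values used here only for illustration:

```python
# Does a given generation rate outpace a typical human reader?
# Assumed constants (ballpark values, not from Intel's announcement):
READING_WPM = 250.0      # average silent reading speed, words/min
WORDS_PER_TOKEN = 0.75   # rough English-text heuristic

def outpaces_reader(tokens_per_sec: float) -> bool:
    """True if the generated words/min exceed the assumed reading speed."""
    generated_wpm = tokens_per_sec * WORDS_PER_TOKEN * 60.0
    return generated_wpm > READING_WPM

if __name__ == "__main__":
    print(outpaces_reader(10.0))  # True: 10 tok/s ≈ 450 words/min
    print(outpaces_reader(4.0))   # False: 4 tok/s ≈ 180 words/min
```

Under these assumptions, anything above roughly 5.6 tokens per second already outruns an average reader, which is why even client-class processors can feel "instant" for chat-style generation.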

Disclaimer: The information contained in each press release posted on this site was factually accurate on the date it was issued. While these press releases and other materials remain on the Company's website, the Company assumes no duty to update the information to reflect subsequent developments. Consequently, readers of the press releases and other materials should not rely upon the information as current or accurate after their issuance dates.