
Machine Learning on Local or Cloud-Based NVidia or Apple GPUs

Michael O'Brien edited this page Nov 1, 2023 · 14 revisions

Introduction

This page details configurations for running machine learning software, for LLM and general AI applications, on a variety of hardware: NVidia professional workstation GPUs (local or in the cloud) and local Apple ARM hardware.
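Since the same code may run on an NVidia GPU, an Apple ARM GPU, or plain CPU depending on which of the machines below it lands on, a common first step is runtime device selection. A minimal sketch using PyTorch (the function name `pick_device` is illustrative; it assumes PyTorch 1.12+ for Apple MPS support and falls back to CPU when no framework or accelerator is present):

```python
def pick_device() -> str:
    """Return 'cuda' for NVidia GPUs, 'mps' for Apple Silicon, else 'cpu'."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed; nothing to accelerate with
    if torch.cuda.is_available():
        return "cuda"  # NVidia GPU, local workstation or cloud instance
    mps = getattr(torch.backends, "mps", None)  # absent on torch < 1.12
    if mps is not None and mps.is_available():
        return "mps"  # Apple ARM GPU via Metal Performance Shaders
    return "cpu"

if __name__ == "__main__":
    print(pick_device())
```

The `getattr` guard keeps the sketch from raising on older PyTorch builds that predate the MPS backend.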

Quickstart

Setup

Architecture

DevOps

Example ML Systems

2023 Lenovo P1 Gen 6 : i7-13800H, 64 GB RAM, NVidia RTX 3500 Ada (AD104, 5120 cores, 12 GB 192-bit VRAM)

2019 Lenovo P17 Gen 1 : Xeon W-10855M, 128 GB RAM, NVidia Quadro RTX-5000 (TU104 Turing, 3072 cores, 16 GB 256-bit VRAM)

2023 Custom : i9-13900K, 192 GB RAM, Dual NVidia RTX 4090 MSI Suprim Liquid X

2023 Custom : i9-13900K, 128 GB RAM, Dual NVidia RTX-A4500 with NVidia RTX-4000

2021 Lenovo X1 Carbon gen 9 : Intel GPU

Google Cloud Workstation : NVidia L4 GPU

Google Pixel 6 : Google Tensor SoC with TPU
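The NVidia systems above span VRAM budgets from 12 GB to dual 24 GB cards, which matters when sizing LLM workloads. On any of those machines a quick inventory can be taken with `nvidia-smi`; a sketch that parses its CSV query output (the query flags are standard `nvidia-smi` options, but the sample string is illustrative, not captured from the machines listed above):

```python
import subprocess

# Standard nvidia-smi query for GPU name and total memory, CSV, no header row.
QUERY = ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"]

def parse_gpu_csv(text: str) -> list[tuple[str, str]]:
    """Turn 'name, memory' CSV lines into (name, memory) tuples."""
    rows = []
    for line in text.strip().splitlines():
        name, mem = (field.strip() for field in line.split(",", 1))
        rows.append((name, mem))
    return rows

def list_gpus() -> list[tuple[str, str]]:
    """Run nvidia-smi and return one (name, memory) tuple per GPU."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    return parse_gpu_csv(out.stdout)

# Illustrative sample of the format parse_gpu_csv expects (a dual-GPU box):
SAMPLE = "NVIDIA RTX A4500, 20470 MiB\nNVIDIA RTX A4500, 20470 MiB"
```

On a machine without NVidia drivers, `list_gpus` raises because `nvidia-smi` is absent; the parsing step itself needs no GPU.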

Links
