Teradata supports your use of generative AI, large language models, and neural networks. In addition to NVIDIA AI Enterprise and NVIDIA AI Microservices, Teradata supports the following:
Compute and Networking Capabilities
- Storage architectures optimized for AI workloads
- Dedicated networking solutions for AI workloads
- Automated provisioning and scaling
- Compute instance recommendations for AI workload profiles
- Planning, optimization, and FinOps tools
- Sustainability tools
- Specialized AI processors (for example, Google Cloud TPU, Amazon Trainium/Inferentia)
- InfiniBand
- Ethernet with DPUs (data processing units)
GPUs
- NVIDIA H200
- NVIDIA H100
- NVIDIA A100-80
- NVIDIA A100-40
- NVIDIA A40
- NVIDIA A10
- NVIDIA T4
- NVIDIA L4
- NVIDIA L40
- NVIDIA L40S
- NVIDIA M60
- NVIDIA V100
- AMD MI250X
- AMD MI300X
- Externally accessible AI ASICs
- Intel Gaudi
- Qualcomm
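As an illustrative aid only (not a Teradata API), the supported NVIDIA models above can be checked programmatically. A minimal sketch, assuming the GPU model string is obtained from a tool such as nvidia-smi; the function name and matching logic are hypothetical:

```python
# Supported NVIDIA GPU model strings, taken directly from the list above.
SUPPORTED_NVIDIA_GPUS = {
    "H200", "H100", "A100-80", "A100-40", "A40", "A10",
    "T4", "L4", "L40", "L40S", "M60", "V100",
}

def is_supported_nvidia_gpu(model_name: str) -> bool:
    """Return True if a reported GPU model (e.g. "NVIDIA H100")
    matches one of the supported models listed above.
    """
    # Drop the vendor prefix and compare whole tokens, so "L4" does not
    # falsely match unrelated model numbers.
    tokens = model_name.upper().replace("NVIDIA", "").split()
    return any(token in SUPPORTED_NVIDIA_GPUS for token in tokens)
```

This is a sketch under the assumption that the model string contains the bare model token; production code would normally parse the exact output format of the driver tooling in use.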
CPUs
- AMD EPYC Rome
- Intel Xeon Ice Lake
- Intel Xeon 3
Tools, Frameworks, and Compute Engines
- Apache MXNet
- AutoML frameworks (for example, AutoKeras, Auto-sklearn)
- BigDL
- Caffe
- Caffe2 (now part of PyTorch)
- Chainer
- cuBLAS
- cuDNN
- Darknet
- DL4J (Deeplearning4j)
- Horovod (distributed deep learning)
- JAX
- Keras
- LiteRT (new name for TFLite)
- MegEngine
- Microsoft Cognitive Toolkit (CNTK)
- MindSpore
- MLlib (Apache Spark Machine Learning Library)
- .NET
- NVIDIA TensorRT
- ONNX
- OpenVINO
- PaddlePaddle
- PAX
- PlaidML
- PyTorch (includes Caffe2)
- Ray (distributed computing for AI/ML)
- TFLite (TensorFlow Lite, now called LiteRT)
- Theano
- Torch
- TPOT (tree-based pipeline optimization tool)
- XGBoost
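As an illustrative aid only (not part of Teradata's tooling), you can check which of the listed frameworks are importable in the current Python environment. A minimal sketch; the mapping of product names to Python module names is an assumption, and the function name is hypothetical:

```python
import importlib.util

# Assumed mapping from the framework names above to their usual
# Python import names; adjust for your environment.
FRAMEWORK_MODULES = {
    "PyTorch": "torch",
    "Keras": "keras",
    "JAX": "jax",
    "XGBoost": "xgboost",
    "Ray": "ray",
    "ONNX": "onnx",
}

def available_frameworks() -> list:
    """Return the framework names whose Python module can be located
    in the current environment, without importing them."""
    return [
        name
        for name, module in FRAMEWORK_MODULES.items()
        if importlib.util.find_spec(module) is not None
    ]
```

`importlib.util.find_spec` only locates the module on the import path, so this check is cheap and does not trigger framework initialization (for example, GPU driver loading).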