"inference" Posts in "Artificial Intelligence (AI)"

502 Discussions

Running Falcon Inference on a CPU with Hugging Face Pipelines

Learn how to run inference with the 7-billion and 40-billion parameter Falcon models on a 4th Gen Xeon CPU with Hugging F...
0 Kudos
0 Comments

“AI Everywhere” is Connected by Ethernet Everywhere

As we realize AI is everywhere, how does the data move “everywhere”? The answer is Ethernet.
1 Kudos
0 Comments

Intel® Xeon® Processors Are Still the Only CPU With MLPerf Results, Raising the Bar By 5x

4th Gen Xeon processors deliver remarkable gen-on-gen gains across all MLPerf workloads
1 Kudos
1 Comment

Microsoft Azure Cognitive Service Containers on-premises with Intel Xeon Platform

Microsoft Azure Cognitive Services based AI Inferencing on-premises with Intel Xeon Platforms
0 Kudos
1 Comment

End-to-End Azure Machine Learning on-premises with Intel Xeon Platforms

Azure Machine Learning on-premises with Intel Xeon Platforms and Kubernetes
0 Kudos
0 Comments

Optimize Inference with Intel CPU Technology

Enjoy improved inferencing and lower overall total cost of ownership (TCO) across an integrated AI platf...
0 Kudos
0 Comments

Deploy AI Inference with OpenVINO™ and Kubernetes

In this blog, you will learn how to use key features of the OpenVINO™ Operator for Kubernetes
0 Kudos
0 Comments

Simplified Deployments with OpenVINO™ Model Server and TensorFlow Serving

Learn how to perform inference on JPEG images using the gRPC API in OpenVINO Model Server.
1 Kudos
0 Comments

Load Balancing OpenVINO™ Model Server Deployments with Red Hat OpenShift

Learn how to deploy AI inference-as-a-service and scale to hundreds or thousands of nodes using Open...
0 Kudos
0 Comments