
About NVIDIA Triton Inference Server
NVIDIA Triton Inference Server is open-source inference serving software that simplifies AI model deployment and inference. It supports models from frameworks such as TensorFlow, PyTorch, ONNX, and TensorRT. Triton delivers optimized performance for real-time, batched, ensemble, and streaming workloads, and is part of NVIDIA AI Enterprise for accelerating AI development and deployment. It can run on NVIDIA GPUs and CPUs, across clouds, at the edge, or in embedded environments.
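To give a sense of what deployment looks like, Triton loads models from a model repository, where each model has a configuration file and one or more numbered version directories. The following is a minimal sketch for a hypothetical ONNX model; the model name, tensor names, and shapes are placeholders, not part of any specific project:

```
model_repository/
└── my_model/                  # placeholder model name
    ├── config.pbtxt
    └── 1/                     # version directory
        └── model.onnx

# config.pbtxt (sketch)
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  { name: "INPUT__0", data_type: TYPE_FP32, dims: [ 3 ] }
]
output [
  { name: "OUTPUT__0", data_type: TYPE_FP32, dims: [ 1 ] }
]
```

Pointing `tritonserver --model-repository=/path/to/model_repository` at such a directory is enough for Triton to serve the model over its HTTP and gRPC endpoints.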

Need expertise with NVIDIA Triton Inference Server?
Fabian Dechent has optimized inference performance for the most demanding AI workloads in production - let's solve yours.
Common challenges with NVIDIA Triton Inference Server
dida services
I. dida provides a production-ready microservice that serves the respective models on a Triton Inference Server (either on local infrastructure or on cloud computing resources)
II. dida supports bringing existing models into deployment-ready formats
III. dida demonstrates the capability and flexibility of this setup with regard to monitoring, versioning, and model management, enabling the client to use the Triton Server to its full potential
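Once a model is deployed this way, clients query it over Triton's standard KServe-v2 HTTP API (`POST /v2/models/<name>/infer`). As a rough illustration, the snippet below builds such a request body using only the Python standard library; the model, tensor names, and shapes are placeholders and would match your own model's configuration:

```python
import json

def build_infer_request(values):
    """Build a KServe-v2-style JSON body for POST /v2/models/<name>/infer.

    Tensor names and shapes here are placeholders; they must match the
    names and dims declared in the model's Triton configuration.
    """
    return {
        "inputs": [
            {
                "name": "INPUT__0",          # placeholder input tensor name
                "shape": [1, len(values)],   # batch of 1
                "datatype": "FP32",
                "data": values,
            }
        ],
        "outputs": [{"name": "OUTPUT__0"}],  # placeholder output tensor name
    }

# Serialized body, ready to send with any HTTP client:
body = json.dumps(build_infer_request([0.1, 0.2, 0.3]))
```

In practice one would typically use the official `tritonclient` Python package instead of hand-built JSON, but the request shape above is what travels over the wire either way.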
How it works
-
1
Book a 30min introductory meeting with one of our Machine Learning Engineers and tell us about your current situation / problem.
-
2
Based on your situation and requirements, dida will propose a lean plan for supporting your team quickly and efficiently (e.g. 1-3 days of NVIDIA Triton Inference Server support for approx. 1-3K EUR).
-
3
Within the next three working days, an experienced Machine Learning Engineer with competences in NVIDIA Triton Inference Server will start working on your problem.
-
4
After successful completion: decide whether to expand the service, book a fixed capacity per month, or continue on your own.
Flexible support services for development teams
Our support services are aimed at teams that:
-
are planning on working with the NVIDIA Triton Inference Server and want dida to do the initial setup and integration into their projects, or
-
are already working with the NVIDIA Triton Inference Server and desire consultation, development support or evaluations of already developed code.
About dida


Frequently asked questions
-
Who will be working on our project?
Depending on the desired support volume, dida will provide you with 1-2 experienced Machine Learning Engineers who have the most experience with this specific tool, technology, or framework.
-
Who will be our main point of contact?
The dida engineer leading the project will be your main point of contact, so that the respective engineers can communicate with each other directly.
-
How quickly can dida’s team help us address our current challenges with the NVIDIA Triton Inference Server?
After signing the contract, dida guarantees to start within the next three working days.
-
How will we communicate during the project?
We’re open to your preferred means of communication and organization (email, Slack, MS Teams, GitLab / GitHub issues, …).
-
Does dida work remotely or on-site?
Most of dida’s work is typically performed from our office in Berlin, Germany. Nevertheless, we regularly visit our clients for workshops, interim and final presentations, or whenever the situation demands it. If on-site work is required, please let us know so that we can arrange it.
-
How often will we receive updates on the project?
For short-term support services, you will be updated either daily or after every milestone.
-
What qualifications and experience does your team have?
dida’s team consists largely of scientists and engineers with backgrounds in mathematics and physics, many of them holding PhDs from prestigious institutions. The highly specialized team has eight years of industry experience in implementing machine learning solutions. dida solves algorithmically complex problems and often tackles use cases where less specialized in-house departments previously failed. Among its clients are large European organizations such as Deutsche Bahn, Klöckner, Zeiss, and APCOA.
-
Are there any subcontractors involved in the service delivery?
No, all purchased services will be provided by dida employees.