SDK Inference

The OI ML Client SDK is coming soon! This official library will let developers interact with models deployed on the Open Innovation Platform. Stay tuned for updates on its release and documentation.


What to Expect

  • Unified API – Access text generation, sequence classification, classical ML, and more through a single, consistent interface.
  • Token-Based Security – Authenticate using secure tokens generated in the model’s settings.
  • Optimized Payloads – Send data in the right format automatically, reducing boilerplate code.
  • Extended Support – Leverage built-in methods for streaming responses, advanced model parameters, and custom chat templates.

Next Steps

  • Inference REST API – Use token-based APIs for direct model calls until the SDK is live.
  • Inference UI – Learn about the platform’s built-in user interface for manual model testing.
  • Model Version Configuration – Discover how to set up your model for easier integration with the future SDK.
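Until the SDK ships, the token-based REST API mentioned above is the way to call deployed models programmatically. The sketch below shows the general shape of such a call: a POST request carrying a bearer token generated in the model's settings. The endpoint URL, header names, and payload fields here are assumptions for illustration only; consult the Inference REST API documentation for the exact contract.

```python
import json
import urllib.request

# Hypothetical values -- replace with the endpoint and token from your
# model's settings page on the Open Innovation Platform.
API_URL = "https://platform.example.com/api/v1/models/my-model/predict"
API_TOKEN = "YOUR_TOKEN"


def build_inference_request(url: str, token: str, payload: dict) -> urllib.request.Request:
    """Build a token-authenticated JSON POST request for a model endpoint."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",   # token-based security
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Assumed payload shape; the real API may expect different field names.
req = build_inference_request(API_URL, API_TOKEN, {"inputs": "Hello, world!"})

# Send the request once you are pointing at a real deployment:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Because the request is built separately from being sent, you can inspect the headers and payload before committing to a network call, and swap in a different HTTP client later without changing the construction logic.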