While we would love to be in a position to train our own LLM from scratch, we recognize the challenges of that endeavor.

Hence, we will instead focus on fine-tuning open-source models, which helps us achieve our aim of decentralization while still giving us access to world-class models.

We will work with leading open-source models, including, but not limited to:

  • Mistral 7B

  • Llama 2

  • SDXL and SD-Turbo

We will initially use cloud providers to fulfill some of our capabilities as we fine-tune the models and collect user feedback. However, we aim to eventually deploy and serve the models through decentralized providers such as Bittensor.
