Now that we have federation support (#2915 and #2343), it makes sense to build a place under the LocalAI website to list and visualize community pools.
By community pools, I'm referring to a way for people to share swarm tokens, so they can both provide hardware capabilities and use the federation for inference (like Petals, but with more "shards").
The idea is to have an "explorer" or a "dashboard" showing a list of active pools, how many federated instances or llama.cpp workers each has, and their capability and availability.
Things to notice:
- In the dashboard we should list only active pools and remove pools that are offline or have 0 workers/federated instances
- Users can add arbitrary tokens/pools; these get scanned periodically and the dashboard reports their status
- We need to explicitly mention that this is provided without any warranty, and that you contribute/use it at your own risk - we don't take any responsibility for how you use it, or for malicious actors trying to fiddle with your systems. We are going to tackle bugs of course as a community, but users should be very well aware that this is experimental and might be insecure to deploy on your hardware (unless you take all the precautions).
This would allow users to:
- setup a cluster, and dedicate that for a specific community
- share the compute resources with others
- run inference even without beefy hardware, using compute provided by other community peers