Import client libraries, allow to make grpc calls to services #5
Conversation
Allows deposit, withdrawal, opening and funding channels
…ed base sdk and client config
Allow to open channels using client metadata for service
<client instance>.get_channel_state()
…d client from event logs
… call Updated README to reflect the change
I would propose to not upload
Where could I get examples of using this SDK to make calls?
This works for me
I made a repo called snet-code-examples which will feature server code and client code examples, like the grpc repo (https://github.com/grpc/grpc/tree/master/examples), for various languages. Anyway, something like what you see in the README or something like this works:

```python
from snet_sdk import Snet

snet = Snet(private_key=<private key>)
translation = snet.client("snet", "translation")
stub = translation.grpc.translate_pb2_grpc.TranslationStub(translation.grpc_channel)
request = translation.grpc.translate_pb2.Request(text="Hello World.", source_language="en", target_language="de")
resp = stub.translate(request).translation
print(resp)
```

The SDK expects to find the compiled libraries in
You can let the sdk automatically pick a channel or you can specify a
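As noted later in the thread, the client can also be pinned to a specific channel instead of letting the SDK pick one. A rough sketch, assuming the channel id is passed as a keyword argument to the client constructor (the exact parameter name and placement are assumptions, not confirmed by this PR):

```python
# Hypothetical sketch: pin the client to one payment channel rather than letting
# the sdk select an open, funded channel automatically. The "channel_id" keyword
# is an assumption; check the README for the real signature.
snet = Snet(private_key=<private key>)
translation = snet.client("snet", "translation", channel_id=42)  # 42 is an example id
```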
@vforvalerio87
Why should the user know it? I would argue that the user should specify only org_id/service_id (and optionally channel_id in the case of several "initialized" channels, and group_name in the case of multiple groups). For example, in my simplified example of using snet-cli as a library (which I do not propose to use as the SDK, but it is a simple demonstration of how it could be done), the call looks like this:
see
As you can see, the user specifies only
@astroseger It's not possible to hide the protobuf details. The message types grpc supports are more complicated than a key-value request. The objects can have either/or logic and be nested messages of different types. I think we should do the obvious thing, which is behave like grpc and protobuf; people know those. Maybe one day we can be more clever, but we haven't dealt with streaming data yet, and that will make it harder to know automatically what message type we need to use for the streaming component and the order in which messages should be presented.
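For illustration of that point (this is not code from the PR), the stock well-known Value type already shows the either/or and nesting that a flat key/value request cannot express:

```python
# Illustration with a stock protobuf type: messages can contain oneofs ("either/or"
# fields) and nested messages, which a flat key/value request cannot express.
from google.protobuf import struct_pb2

value = struct_pb2.Value()
value.struct_value.fields["text"].string_value = "Hello World."  # nested Struct inside Value
print(value.WhichOneof("kind"))  # -> "struct_value": only one oneof branch is set

value.number_value = 3.14  # setting a different branch clears the previous one
print(value.WhichOneof("kind"))  # -> "number_value"
```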
However, I do agree it would be nice to replace
Ideally:
Although, unfortunately, I realise that isn't easy, because if a service spec has multiple proto files they may have the same message name in different files, so trying to squash them all into the same module will cause problems. I think the convenience methods that @astroseger mentions, or a common namespace (shown in my code example), are a nice idea, but we should still make it easy to use the grpc interface as @vforvalerio87 has implemented it. I think our sdk should do the correct thing first (work identically to grpc), and convenience can be added afterwards.
@ferrouswheel Hmm... I do not understand... I simply propose not to repeat information which is already known to the system. We can get
In @vforvalerio87's example the user has to know the name of the protobuf file and the name of the request class. But, for example, in snet-cli it looks like this:
we don't need to know the real name of the protobuf file, or the names of the request and response classes.
(@ferrouswheel And it works perfectly in the case of the same request name in different protobuf files, because we fetch stub_class + request_class by method_name (+ service_name in case of conflict).)
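A minimal sketch of the kind of lookup described above, resolving the request class from a compiled _pb2 module by method name; this is illustrative only, not snet-cli's actual implementation:

```python
# Illustrative sketch (not snet-cli's actual code): given a compiled _pb2 module,
# find the request message class for a method so the caller never has to know its name.
def resolve_request_class(pb2_module, method_name, service_name=None):
    for service in pb2_module.DESCRIPTOR.services_by_name.values():
        if service_name is not None and service.name != service_name:
            continue  # disambiguate when several services define the same method name
        method = service.methods_by_name.get(method_name)
        if method is not None:
            # Assumes the request message is defined at the top level of the same file,
            # where the generated class is an attribute of the module.
            return getattr(pb2_module, method.input_type.name)
    raise KeyError("method %r not found" % method_name)

# e.g. request_cls = resolve_request_class(translate_pb2, "translate")
#      request = request_cls(text="Hello World.", source_language="en", target_language="de")
```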
The face-services are probably the most complicated use of protobuf in SingularityNET. It is much simpler at the moment because I had to remove all the streaming support, but it still has a shared common file across 4 services. I have to hack around it because we force all service proto files into the same directory, and snet doesn't allow us to select a single service to be published, so I have to shuffle my proto files into temporary directories when I publish the service spec. While we can use json objects and convert them magically, in a compiled language that will be annoying. We will want to use the service's protobuf types to ensure the data we are providing is correct. There are also other checks I'm pretty sure we don't do. Can the dapp and snet-cli handle
I have run into a fair number of problems due to assumptions about grpc and protobuf in our system, which is why I'd prefer us to just do the basic thing first, and then make sure any magic we add doesn't break things. I'm starting to think we need a benchmark of grpc patterns to test any assumptions we make when trying to simplify things. When we support streaming, it will be affected by a lot of these decisions. I think there is room to still have the simple/easy version of the sdk you propose @astroseger - I just don't think it is sufficient for the full set of services one can express in grpc.
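On the json-conversion point, a minimal illustration of what that "magic" would rest on in Python, using a stock message type rather than a service's own (not code from this PR):

```python
# dict <-> protobuf conversion via google.protobuf.json_format. Parsing against a real
# message class also validates field names, which a plain key/value dict would not.
from google.protobuf import api_pb2  # stock message type, used here only as an example
from google.protobuf import json_format

msg = json_format.ParseDict({"name": "example.Translation", "version": "v1"}, api_pb2.Api())
print(json_format.MessageToDict(msg))  # {'name': 'example.Translation', 'version': 'v1'}

# json_format.ParseDict({"nmae": "oops"}, api_pb2.Api())  # raises ParseError (unknown field)
```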
Ok, after porting one of my grpc clients to this sdk (or the simple alternative), there is one major design thing I'm concerned about. It looks like I need different code paths for singularitynet and for calling the service without the singularitynet layer, e.g. if I have a local service and I'm testing it (without any blockchain stuff), I want to use almost exactly the same code as when I'm calling the production service. Ideally I'd be able to fetch the model from either a local directory (yet to be published) or from snet. Then it'd be compiled and I could create a stub and request, but it'd only add the MPE stuff if I'm actually calling the service via MPE. Any idea if you've already allowed for this @vforvalerio87 - or if it'd be easy to support something like this?
If a directory and endpoint are provided, then no MPE work is done and nothing is looked up on the registry, etc. Edit: updated to include what @astroseger's simple-sdk would have to look like.
The above example code has the advantage that I could download another service's spec and then use it locally while testing. I can implement a mock interface (that just returns dummy responses), and I can then test the logic of my application without a live connection to an ethereum network. Being able to test individual components or services is an important consideration for any microservice architecture, and SNet is essentially a giant microservice architecture for AI. Eventually it'd be nice for the SDK to do this automatically, or for it to help the developer run their application in offline/test mode.
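A purely hypothetical sketch of the dual code path described here (this is not the example code referred to above, which is not reproduced in this thread); only grpc.insecure_channel and the client's grpc_channel attribute used earlier in the thread are assumed:

```python
# Hypothetical sketch of one code path for local testing and for SNet calls.
import grpc

def get_channel(local_endpoint=None, snet_client=None):
    """Plain grpc channel for local testing, sdk-managed channel when calling via MPE."""
    if local_endpoint is not None:
        return grpc.insecure_channel(local_endpoint)  # no blockchain / MPE involved
    return snet_client.grpc_channel  # channel provided by the sdk client

# The same stub/request code then works in both modes, e.g.:
# stub = translate_pb2_grpc.TranslationStub(get_channel(local_endpoint="localhost:50051"))
```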
Set default channel expiration to one week from minimum acceptable time for daemon
Return error if no usable channel is found
I pushed the latest changes. Now users can specify different private keys for ethereum transactions and for the mpe state channel signer. Technically you can also just specify a signer key. Channels are correctly filtered for both sender address and signer address. If the user does not provide a dedicated signer private key, the signer address defaults to the sender address. I'm also working on fixing an issue with using a mnemonic, which goes along with another change regarding working with unlocked accounts (e.g. ganache-cli).
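A hypothetical configuration sketch of that key separation; the keyword names below are assumptions for illustration, not necessarily the SDK's actual parameters:

```python
# Hypothetical sketch: separate keys for on-chain transactions and for signing MPE
# channel state. Parameter names are assumptions, not the sdk's confirmed API.
snet = Snet(
    private_key=<transaction private key>,    # sender key: pays for deposit/fund/extend txns
    signer_private_key=<signer private key>,  # only signs mpe state-channel messages
)
# If the signer key is omitted, the signer address defaults to the sender address.
```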
signer and for blockchain transactions (fund, deposit, extend, etc)
Sketch transaction logic to fund/extend channels automatically
- create a channel if none exists
- if no funded and non-expired channel exists, fund and extend any channel
- if non-expired channels exist but none is funded, fund a non-expired channel
- if funded channels exist but they are all expired, extend a funded channel
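A short sketch of that selection policy as a pure function; the channel fields ("balance", "expiration") and the action labels are assumptions used only for illustration, not the sdk's internals:

```python
# Sketch of the channel-selection policy listed above.
def pick_channel_action(channels, current_block):
    funded = [c for c in channels if c["balance"] > 0]
    non_expired = [c for c in channels if c["expiration"] > current_block]
    usable = [c for c in channels if c["balance"] > 0 and c["expiration"] > current_block]

    if not channels:
        return ("open_channel", None)             # create a channel if none exists
    if usable:
        return ("use", usable[0])                 # a funded, non-expired channel exists
    if non_expired:
        return ("add_funds", non_expired[0])      # non-expired channels exist but none is funded
    if funded:
        return ("extend", funded[0])              # funded channels exist but all are expired
    return ("fund_and_extend", channels[0])       # no channel is funded or non-expired
```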
New changes:
(provided "allow_transactions" is passed as a configuration parameter)
@tiero @astroseger @ferrouswheel please review
Should we add a requirements.txt so it's easier to uninstall all dependencies from global scope?
pip uninstall -r requirements.txt
I had a clash with another python project. (I do not use virtual env)
We would have to keep dependencies both in setup.py and requirements.txt then. Not ideal.
Yep, setup.py is the place where we add dependencies and that's fine. But how do you manage to uninstall all the dependencies from setup.py? What we could do is make the txt file a slave of setup.py (pip freeze or python setup.py install --record text.txt). Not blocking at all, but it would be handy to understand how to clean up the global dependencies.
A few comments:
And I have a design proposition for how to speed up the SDK and simplify unification of the SDK and snet-cli in the future (I will make it in a separate post).
I'm sorry, I forgot two issues (before I make my design proposition):
Thanks for the feedback, I'll address each point
Design proposition. In snet-cli we cache the following information:
I underline that we only cache this information. At each call we actually verify that the metadataURI hasn't been changed in the Registry, and if we detect that the metadataURI has changed we reinitialize the service (download the metadata and recompile the protobuf files). This approach has the following advantages:
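A minimal sketch of that cache-but-revalidate pattern; the registry accessor and helper names are hypothetical, not snet-cli's actual internals:

```python
# Sketch of "cache, but re-validate the metadataURI on every call".
# registry.get_metadata_uri, download_metadata and compile_protobuf are hypothetical helpers.
def get_service(cache, registry, org_id, service_id):
    current_uri = registry.get_metadata_uri(org_id, service_id)  # cheap registry read
    entry = cache.get((org_id, service_id))
    if entry is None or entry["metadata_uri"] != current_uri:
        metadata = download_metadata(current_uri)  # re-initialize only when the URI changed
        stubs = compile_protobuf(metadata)
        entry = {"metadata_uri": current_uri, "metadata": metadata, "stubs": stubs}
        cache[(org_id, service_id)] = entry
    return entry
```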
@vforvalerio87 We should dynamically cache everything (compiled protobuf, service registration, service metadata, channels (the result of _get_channels)). It is not logical to only (statically) cache compiled protobufs. It would actually be more logical to do everything dynamically, i.e. simply recompile the protobuf at each run (but I don't think that is a good idea, because caching is the solution...)
About caching, I opened a dedicated issue because I think the SDK has special requirements. Let me explain: while for the CLI it might be perfectly fine to store everything locally in the file system (for example, using pickle), for the SDK I think it would be better to have a pluggable caching mechanism, where you either specify a default method of storing data locally (by passing a string, for example "pickle", or whatever) which is known by the SDK, or you pass a class which exposes predefined methods to serialize/deserialize the data to be cached (for example, a class with "save" and "load" methods). There are mainly two reasons for this:
So basically, I would like to keep the SDK completely stateless by default (or more precisely, it shouldn't mandate the way that data is persisted), with the option of caching data however you prefer. This is mentioned in #7
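A minimal sketch of the pluggable cache interface described above; the save/load method names come from the comment, while the class name, constructor, and how the SDK would accept it are assumed for illustration:

```python
# Sketch of a pluggable cache. Only the save/load method names are taken from the
# comment above; the default pickle-backed implementation is illustrative.
import os
import pickle

class PickleCache:
    """Local-filesystem default; any object exposing save/load could replace it."""
    def __init__(self, path):
        self.path = os.path.expanduser(path)

    def save(self, data):
        with open(self.path, "wb") as f:
            pickle.dump(data, f)

    def load(self):
        try:
            with open(self.path, "rb") as f:
                return pickle.load(f)
        except FileNotFoundError:
            return None

# e.g. Snet(private_key=<private key>, cache=PickleCache("~/.snet/sdk_cache"))
#      ("cache" as a constructor argument is an assumption)
```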
@vforvalerio87
@vforvalerio87 I'm talking about dynamic caching. snet-cli works perfectly well in the stateless case (ok, you need to provide configuration). You can erase the whole cache at each run and it will still work perfectly...
Oh ok, I see what you mean. I would still leave the option to provide the statically compiled _pb2 files, but the application would also download/compile them for you if you don't have them for the specified <org_id>/<service_id>. This is just for python and js, by the way, not for the sdks for compiled languages, of course. The reason why I would keep the option to provide the statically compiled libraries beforehand is:
(Actually by the way with the sdk you can specify a channel_id in the client instance, at which point it won't look for a channel to use again)
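For the download/compile-at-runtime path discussed above, a rough sketch of how the compilation step could be done with grpcio-tools (the wrapper function and paths are illustrative):

```python
# Sketch of compiling a downloaded service spec at runtime with grpcio-tools,
# roughly what a dynamic (non-precompiled) code path would do.
import pkg_resources
from grpc_tools import protoc

def compile_proto(proto_dir, proto_file, out_dir):
    well_known = pkg_resources.resource_filename("grpc_tools", "_proto")
    return protoc.main([
        "protoc",
        "--proto_path={}".format(proto_dir),
        "--proto_path={}".format(well_known),  # bundled well-known types (google/protobuf/*.proto)
        "--python_out={}".format(out_dir),
        "--grpc_python_out={}".format(out_dir),
        proto_file,
    ])  # returns 0 on success
```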
Thanks @astroseger @vforvalerio87. I see that there are good suggestions here which we can pick up in subsequent iterations. From a first-version perspective I don't see any blockers as such. The discussion above will generate a bunch of issues that we will pick up and resolve in the upcoming days. For now I am inclined to merge this.
Yes, for sure, we could provide an option for statically compiled languages. It might be more convenient for someone who writes the client for his own service (but even in this case I would still recommend using the dynamically compiled (and cached) protobuf, because it will automatically update everything in case of a protobuf update in the Registry). So again, my point is the following: I think the way in which we deal with caching protobuf/metadata/channels in snet-cli can and should be reused in the SDK. And it goes with @ferrouswheel's idea of having everything, for all components, in
I can easily isolate the functions which we use in snet-cli into a separate library, which will be the first part of the common stuff between the SDK and snet-cli:
... So I see the comment from @raamb. I agree that we could discuss it in a separate issue...
Merging this in now and will create individual issues for the points raised
Implemented receiving metadata and proto files from the lighthouse.
Allows importing of client libraries by [org_id, service_id], dynamically loads all compiled modules for a service
Basic methods for programmatic interaction with MPE (deposit, withdraw, open channel, deposit and open, extend, add funds, extend and add funds)
When a client instance is created, it automatically gets the list of service endpoints and the list of open channels, and selects an open and funded channel for the client to use when making grpc calls to the service.
Automatically computes metadata for the next service call and adds the metadata to each service call transparently.
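As an illustration of the mechanism behind that last point (not the SDK's actual code), per-call metadata can be attached transparently with a standard grpc client interceptor:

```python
# Sketch of adding metadata to every call transparently via a grpc client interceptor.
# This shows the generic grpc mechanism only; the real payment headers and values are
# computed by the sdk and are not reproduced here.
import collections
import grpc

class _ClientCallDetails(
        collections.namedtuple("_ClientCallDetails",
                               ("method", "timeout", "metadata", "credentials")),
        grpc.ClientCallDetails):
    pass

class MetadataInterceptor(grpc.UnaryUnaryClientInterceptor):
    def __init__(self, metadata_provider):
        self._metadata_provider = metadata_provider  # callable returning (key, value) pairs

    def intercept_unary_unary(self, continuation, client_call_details, request):
        metadata = list(client_call_details.metadata or []) + list(self._metadata_provider())
        details = _ClientCallDetails(client_call_details.method, client_call_details.timeout,
                                     metadata, client_call_details.credentials)
        return continuation(details, request)

# channel = grpc.intercept_channel(grpc.insecure_channel(endpoint),
#                                  MetadataInterceptor(lambda: sdk_computed_pairs))
```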