# Table of contents

* [Introduction](README.md)
* [Thinking In Aquamarine](p2p.md)
* [Concepts](concepts.md)
* [Quick Start](quick-start.md)
* [Aquamarine](knowledge_aquamarine/README.md)
  * [Aqua](knowledge_aquamarine/hll.md)
  * [Marine](knowledge_aquamarine/marine/README.md)
    * [Marine CLI](knowledge_aquamarine/marine/marine-cli.md)
    * [Marine Repl](knowledge_aquamarine/marine/marine-repl.md)
    * [Marine Rust SDK](knowledge_aquamarine/marine/marine-rs-sdk.md)
* [Tools](knowledge_tools.md)
* [Node](node.md)
* [Security](knowledge_security.md)
* [Tutorials](tutorials_tutorials/README.md)
  * [Setting Up Your Environment](tutorials_tutorials/recipes_setting_up.md)
  * [Deploy A Local Fluence Node](tutorials_tutorials/tutorial_run_local_node.md)
  * [cUrl As A Service](tutorials_tutorials/curl-as-a-service.md)
  * [Add Your Own Builtins](tutorials_tutorials/add-your-own-builtin.md)
  * [Building a Frontend with JS-SDK](tutorials_tutorials/building-a-frontend-with-js-sdk.md)
* [Research, Papers And References](research-papers-and-references.md)
---
description: WIP -- Tread Carefully
---

# Building A Frontend with JS SDK

The JS SDK provides the means to connect to a Fluence peer-to-peer network from a javascript environment and is currently available for _node.js_ and modern browsers.

To create an application two building blocks are needed: the JS SDK and the Aqua compiler. Both packages are available as _npm_ packages. The JS SDK wraps the AIR interpreter and provides a connection to the network. There is a low-level API for executing AIR scripts and registering handlers for service calls. The Aqua compiler allows you to write code in the Aqua language and compile it to typescript code which can be used directly with the JS SDK.

### Basic usage

To demonstrate the development of a client application, we initiate a bare-bones _node.js_ package and review the steps needed to install the JS SDK and Aqua compiler.

#### 1. Install The _npm_ Package

For the purpose of the demo we will use a very minimal _npm_ package with typescript support:

```text
src
┗ index.ts      (1)
package.json    (2)
tsconfig.json
```

index.ts (1):

```typescript
async function main() {
  console.log("Hello, world!");
}

main();
```

package.json (2):

```javascript
{
  "name": "demo",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "exec": "node -r ts-node/register src/index.ts"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "ts-node": "^9.1.1",
    "typescript": "^4.2.4"
  }
}
```

Let's test if it works:

```bash
$ npm install
$ npm run exec
```

The following should be printed:

```bash
$ npm run exec

> demo@1.0.0 exec C:\work\demo
> node -r ts-node/register src/index.ts

Hello, world!
$ C:\work\demo>
```

#### 2. Setting up the JS SDK and connecting to the Fluence network

Install the dependencies; you will need these two packages.

```bash
npm install @fluencelabs/fluence @fluencelabs/fluence-network-environment
```

The first one is the SDK itself and the second is a maintained list of Fluence networks and nodes to connect to.

All of the communication with the Fluence network is done by using `FluenceClient`. You can create one with the `createClient` function. The client encapsulates the AIR interpreter and connects to the network through a relay. Currently any node in the network can be used as a relay.

```typescript
import { createClient } from "@fluencelabs/fluence";
import { testNet } from "@fluencelabs/fluence-network-environment";

async function main() {
  const client = await createClient(testNet[1]);
  console.log("Is client connected: ", client.isConnected);

  await client.disconnect();
}

main();
```

Let's try it out:

```bash
$ npm run exec

> demo@1.0.0 exec C:\work\demo
> node -r ts-node/register src/index.ts

Is client connected: true
$
```

**Note**: typically you should have a single `FluenceClient` instance per application since it represents the application's identity in the network. You are free to store the instance anywhere you like.
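For instance, a shared instance could be kept in its own module and handed out on demand. This is only a minimal sketch assuming the `createClient`/`FluenceClient` exports shown above; the module layout and relay choice are illustrative, not part of the official API:

```typescript
// fluenceClient.ts -- hypothetical helper holding the single client instance
import { createClient, FluenceClient } from "@fluencelabs/fluence";
import { testNet } from "@fluencelabs/fluence-network-environment";

let client: FluenceClient | undefined;

// Create the client on first use and reuse it afterwards.
export async function getClient(): Promise<FluenceClient> {
  if (!client) {
    client = await createClient(testNet[1]);
  }
  return client;
}
```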
#### 3. Setting up aqua compiler

Aqua is the preferred language for the Fluence network. It can be used with javascript-based environments via an _npm_ package.

**Warning: the package requires java to be installed (it will call "java -jar ... ")**

```bash
npm install --save-dev @fluencelabs/aqua
```

We will also need the standard library for the language:

```text
npm install --save-dev @fluencelabs/aqua-lib
```

Let's add our first aqua file:

```text
aqua                 (1)
┗ demo.aqua          (2)
node_modules
src
┣ compiled           (3)
┗ index.ts
package-lock.json
package.json
tsconfig.json
```

Add two directories, one for aqua sources (1) and another for the typescript output (3).

Create a new text file called `demo.aqua` (2):

```text
import "@fluencelabs/aqua-lib/builtin.aqua"

func demo(nodePeerId: PeerId) -> []string:
    on nodePeerId:
        res <- Peer.identify()
    <- res.external_addresses
```

This script will gather the list of external addresses from some node in the network. For more information about the aqua language refer to the aqua documentation.

The aqua code can now be compiled by using the compiler CLI. We suggest adding a script to the package.json file like so:

```javascript
...
  "scripts": {
    "exec": "node -r ts-node/register src/index.ts",
    "compile-aqua": "aqua -i ./aqua/ -o ./src/compiled"
  },
...
```

Run the compiler:

```bash
npm run compile-aqua
```

A typescript file should be generated like so:

```text
aqua
┗ demo.aqua
node_modules
src
┣ compiled
┃ ┗ demo.ts       <--
┗ index.ts
package-lock.json
package.json
tsconfig.json
```

#### 4. Consuming the compiled code

Using the code generated by the compiler is as easy as calling a function. The compiler generates all the boilerplate needed to send a particle into the network and wraps it into a single call. Note that all the type information, and therefore type checking and code completion facilities, are there!

Let's do it!

```typescript
import { createClient } from "@fluencelabs/fluence";
import { testNet } from "@fluencelabs/fluence-network-environment";

import { demo } from "./compiled/demo"; // import the generated file

async function main() {
  const client = await createClient(testNet[1]);
  console.log("Is client connected: ", client.isConnected);

  const otherNode = testNet[2].peerId;
  const addresses = await demo(client, otherNode); // call it like a normal function in typescript
  console.log(`Node ${otherNode} is connected to: ${addresses}`);

  await client.disconnect();
}

main();
```

If everything is fine, you should see a similar result:

```text
$ npm run exec

> demo@1.0.0 exec C:\work\demo
> node -r ts-node/register src/index.ts

Is client connected: true
Node 12D3KooWHk9BjDQBUqnavciRPhAYFvqKBe4ZiPPvde7vDaqgn5er is connected to: /ip4/138.197.189.50/tcp/7001,/ip4/138.197.189.50/tcp/9001/ws

$
```

### Advanced usage

The Fluence JS SDK also lets you register your own handlers for Aqua VM service calls.

**TBD**

### References

* For the list of compiler options see: [https://github.com/fluencelabs/aqua](https://github.com/fluencelabs/aqua)
* **Repository with additional examples:** [https://github.com/fluencelabs/aqua-playground](https://github.com/fluencelabs/aqua-playground)
# Concepts

The Fluence solution enables a new class of decentralized Web3 solutions providing technical, security and business benefits otherwise not available. In order for solution architects and developers to realize these benefits, a shift in philosophy and implementation is required. With the Fluence tool chain available, developers should find it possible to code meaningful Web3 solutions in short order once an understanding of the core concepts and Aqua is in place.

The remainder of this section introduces the core concepts underlying the Fluence solution.

## **Particles**

Particles are Fluence's secure distributed state medium, i.e., conflict-free replicated data structures containing application data, workflow scripts and some metadata, that traverse programmatically specified routes in a highly secure manner. That is, _particles_ hop from distributed compute service to distributed compute service across the peer-to-peer network as specified by the application workflow, updating along the way.

Figure 4: Node-Service Perspective Of Particle Workflow

![](https://i.imgur.com/u4beJgh.png)

Not surprisingly, particles are an integral part of the Fluence protocol and stack. It is the very decoupling of data + workflow instructions from the service and network components that allows the secure composition of applications from services distributed across a permissionless peer-to-peer network.

While the application state change resulting from passing a particle "through" a service with respect to the data components is quite obvious, the ensuing state change with respect to the workflow also needs to be recognized, which is handled by the Aqua VM.

As depicted in Figure 4, a particle traverses to a destination node's Aqua VM where the next execution step is evaluated and, if specified, triggered. That is, the service programmatically specified to operate on the particle's data is called from the Aqua VM, the particle's data and workflow (step) are updated and the Aqua VM routes the particle to the next specified destination, which may be on the same, another or the client peer.

## **Aqua**

An integral enabler of the Fluence solution is Aqua, an open source language purpose-built to enable developers to ergonomically program distributed networks and applications by composition. Aqua scripts compile to an intermediary representation, called AIR, which executes on the Aqua Virtual Machine (Aqua VM), itself a Wasm module hosted on the Marine interpreter on every peer node.

Figure 5: From Aqua Script To Particle Execution

![](.gitbook/assets/image%20%286%29.png)

Currently, compiled Aqua scripts can be executed from Typescript clients based on the [Fluence SDK](https://github.com/fluencelabs/fluence-js). For more information about Aqua, see the [Aqua book](https://doc.fluence.dev/aqua-book/).

## **Marine**

Marine is Fluence's generalized Wasm runtime executing Wasm Interface Type (IT) modules with Aqua VM compatible interfaces on each peer. Let's unpack.

Services behave similarly to microservices: they are created on nodes and served by the Marine VM and can _only_ be called by the Aqua VM. They also are passive in that they can accept incoming calls but can't initiate an outgoing request without being called.

Services are

* comprised of Wasm IT modules that can be composed into applications
* developed in Rust for a wasm32-wasi compile target
* deployed on one or more nodes
* running on the Marine VM which is deployed to every node

Figure 6: Stylized Execution Flow On Peer

![](.gitbook/assets/image%20%285%29.png)

Please note that the Aqua VM is itself a Wasm module running on the Marine VM.

The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) abstracts the Wasm IT implementation detail behind a handy macro that allows developers to easily create Marine VM compatible Wasm modules. In the example below, applying the `marine` macro turns a basic Rust function into a Wasm IT compatible function and enforces the type requirements at the compiler level.

```rust
#[marine]
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```

## **Services**

Services are logical constructs instantiated from Wasm modules that contain some business logic and configuration data. That is, services are created, i.e., linked, at the Marine VM runtime level from uploaded Wasm modules and the relevant metadata.

_Blueprints_ are json documents that provide the necessary information to build a service from the associated Wasm modules.
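For illustration only, a blueprint might look roughly like the following; the exact field names and module reference format depend on the node version, so treat this as an assumed sketch rather than the authoritative schema:

```javascript
{
  "name": "EthGetters",
  "dependencies": ["curl_adapter", "block_getter"]
}
```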
Figure 7: Service Composition and Execution Model

![](.gitbook/assets/image%20%287%29.png)

Please note that services are not capable of accepting more than one request at any given time. Consider a service, FooBar, comprised of two functions, foo() and bar(), where foo is a longer running function.

```text
-- Stylized FooBar service with two functions
-- foo() and bar()
-- foo is long-running
-- if foo is called before bar, bar is blocked
service FooBar("service-id"):
    bar() -> string
    foo() -> string --< long running function

func foobar(node: string, service_id: string, func_name: string) -> string:
    res: *string
    on node:
        FooBar service_id
        if func_name == "foo":
            res <- FooBar.foo()
        else:
            res <- FooBar.bar()
    <- res!
```

As long as foo() is running, the entire FooBar service, including bar(), is blocked. This has implications with respect to both service granularity and redundancy.

## **Modules**

In the Fluence solution, Wasm IT modules take one of three forms:

* Facade Module: exposes the API of the service comprised of one or more modules. Every service has exactly one facade module
* Pure Module: performs computations without side-effects
* Effector Module: performs at least one computation with a side-effect

It is important for architects and developers to be aware of how their modules and services are constructed with respect to state changes.
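As an illustration, a minimal sketch of the pure vs. effector distinction, reusing the fce-based mounted-binary pattern shown later in the tutorials; the function names here are made up for the example:

```rust
#![allow(improper_ctypes)]

use fluence::fce;
use fluence::MountedBinaryResult;

// Pure module function: no side effects, the result depends only on the input.
#[fce]
pub fn to_uppercase(input: String) -> String {
    input.to_uppercase()
}

// Effector module function: produces a side effect by delegating to the
// host-mounted curl binary (see the cUrl adapter tutorial).
#[fce]
pub fn fetch(url: String) -> MountedBinaryResult {
    unsafe { curl(vec![url]) }
}

#[fce]
#[link(wasm_import_module = "host")]
extern "C" {
    pub fn curl(cmd: Vec<String>) -> MountedBinaryResult;
}
```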
## **Authentication And Permissioning**

Authentication at the service API level is an inherent feature of the Fluence solution. This fine-grained approach essentially provides [ambient authority](https://en.wikipedia.org/wiki/Ambient_authority) out of the box.

In the Fluence solution, this is accomplished by a SecurityTetraplet, which is a data structure with four fields:

```rust
struct SecurityTetraplet:
    peer_id: string
    service_id: string
    fn_name: string
    getter: string
```

SecurityTetraplets are provided with the function call arguments for each (service) function call and are checked by the called service. Hence, authentication based on the **(service caller id == service owner id)** relation can be established at service ingress and leveraged to build powerful, fine-grained identity and access management solutions enabling true zero trust architectures.
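A minimal sketch of such an ownership check, mirroring the `is_owner` pattern used in the SQLite service tutorial later in these docs; the guarded write function here is hypothetical:

```rust
use fluence::fce;

// Returns true only when the calling peer is the peer that created the service.
fn is_owner() -> bool {
    let meta = fluence::get_call_parameters();
    meta.init_peer_id == meta.service_creator_peer_id
}

// Hypothetical write-privileged function guarded by the ownership check.
#[fce]
pub fn privileged_write(data: String) -> bool {
    if !is_owner() {
        return false;
    }
    // ... perform the write using `data` ...
    true
}
```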
## **Trust Layer**

The Fluence protocol offers an alternative to node selection, i.e., connection and permissioning, approaches such as [Kademlia](https://en.wikipedia.org/wiki/Kademlia), called TrustGraph. A TrustGraph is comprised of subjective weights assigned to nodes to manage peer connections. TrustGraphs are node operator specific and transitive. That is, a trusted node's trusted neighbors are considered trustworthy.

{% hint style="info" %}
[TrustGraph](https://github.com/fluencelabs/trust-graph) is currently under active development. Please check the repo for progress.
{% endhint %}

## **Application**

An application is the "frontend" to one or more services and their execution sequence. Applications are developed by coordinating one or more services into a logical compute unit and tend to live outside the Fluence network, e.g., the browser as a peer-client. They can be executed in various runtime environments ranging from browsers to backend daemons.

### **Scaling Applications**

As discussed previously, decoupling at the network and business logic levels is at the core of the Fluence protocol and provides the major entry points for scaling solutions.

At the peer-to-peer network level, scaling can be achieved with subnetworks. Subnetworks are currently under development and we will update this section in the near future.

At the service level, we can achieve scale through parallelization due to the decoupling of resource management from infrastructure. That is, sequential and parallel execution flow logic are an inherent part of Aqua's programming model. In order to achieve concurrency, the target services need to be available on multiple peers, as module calls are blocking.
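For example, at the AIR level the same function can be requested from two peers hosting the same service in parallel. This is an illustrative snippet only, with placeholder node and service names:

```text
; stylized sketch: call the same service function on two different peers in parallel
(par
    (call node_1 (service_1 "foo") [] res_1)
    (call node_2 (service_2 "foo") [] res_2)
)
```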
Figure 8: Stylized Par Execution

![](.gitbook/assets/image%20%288%29.png)

# Developing Modules And Services

# Overview

In the Quick Start section we incrementally created a distributed, database-backed request processing application using existing services with Aquamarine. Of course, we left a lot of detail uncovered, including where the services we used came from in the first place. In this section, we tackle the development and deployment of service components.

Before we proceed, please make sure your Fluence environment is [set up](../recipes_recipes/recipes_setting_up.md) and ready to go. Moreover, we are going to run our own Fluence node to test our services in a network environment. Please refer to the [Running a Local Fluence Node](../tutorials_tutorials/tutorial_run_local_node.md) tutorial if you need support.
# Building The Reward Block Application

Our project aims to

* retrieve the latest block height from the Ethereum mainnet,
* use that result to retrieve the associated reward block data and
* store the result in an SQLite database

In order to simplify Ethereum data access, we will be using the [Etherscan API](https://etherscan.io/apis). Make sure you have your API key ready, as we are using two Etherscan endpoints (illustrative calls are shown right after this list):

* [eth\_blockNumber](https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey=YourApiKeyToken), which returns the most recent block height as a hex string and
* [getBlockReward](https://api.etherscan.io/api?module=block&action=getblockreward&blockno=2165403&apikey=YourApiKeyToken), which returns the block and uncle reward by block height
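For a quick sanity check of the two endpoints from the command line, the following sketch can be used; substitute your own API key for the `YourApiKeyToken` placeholder:

```bash
# most recent block height as a hex string
curl "https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey=YourApiKeyToken"

# block and uncle reward for a given block height
curl "https://api.etherscan.io/api?module=block&action=getblockreward&blockno=2165403&apikey=YourApiKeyToken"
```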
Both _eth\_blockNumber_ and _getBlockReward_ need access to remote endpoints and, for our purposes, a cUrl service will do just fine. Hence, we need to implement a curl adapter to access the curl binaries of the node. Moreover, as _eth\_blockNumber_ returns a hex string and _getBlockReward_ requires an integer, we need a hex to int conversion, which we are going to implement as a stand-alone service. Finally, an SQLite adapter is also required, although the sqlite3.wasm module is [readily available](https://github.com/fluencelabs/sqlite/releases) from the Fluence repo.

The high-level workflow of our application is depicted in Figure 1.

![Figure 1: Stylized Workflow](../../.gitbook/assets/image%20%282%29.png)
# Additional Concepts

In the previous sections we obtained block reward data by discovering the latest Ethereum block created. Of course, Ethereum produces a new block about every 13 seconds or so and it would be nice to automate the data acquisition process. One way, of course, would be to, say, cron or otherwise daemonize our frontend application. But where's the fun in that? We'd rather hand that task to the p2p network.

As we have seen in our AIR workflows, particles travel the path, trigger execution, and update their data. So far, we have only seen services consume previous outputs as (complete) inputs, which means that a service at workflow sequence s needs to be fairly tightly coupled to the service at sequence s-1, which is less than ideal. Luckily, Fluence provides a solution to access certain types of results as _json paths_.

## Peer-Based Script Storage And Execution

As discussed previously, a peer-based ability to "poll" is a valuable feature to some applications. Fluence nodes come with a set of built-in services including the ability to store scripts on a peer with the intent of periodic execution.

This service, just as all distributed services, is managed by Aquamarine. The AIR script looks like:

```text
; add a script to a node
(call node ("script" "add") [script interval] id)
```

where:

* _node_ -- takes the peer id parameter
* _"script"_ -- is the (hard-coded) service id
* _script_ -- takes the AIR script as a **string**
* _interval_ -- the execution interval in seconds, optional, default is three (3) seconds; provide as **string**, e.g. five seconds are expressed as "5"
* _id_ -- is the return value

In addition to the service "add" method, there are also service "list" and service "remove" methods available:

```text
; list
(call node ("script" "list") [] list)

; remove
(call node ("script" "remove") [script_id] result)
```

where remove takes the id (returned by "add") and returns a boolean.

Let's check for any stored services on our local node (make sure you use your node id) and, as expected, no services have been uploaded for storage and execution.

```text
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/list_stored_services.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}'
client seed: 5ydZWdJAzMHAGQ2hCVJCa5ByYq7obp2yc9gRD43ajXrZ
client peerId: 12D3KooWBgzuiNn5mz1DwqDbqapBf3NSF8mRjSJV1KC3VphjAyWL
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
Particle id: 17986fb7-36e7-4f10-b311-d2512f5fe2e5. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  []
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'script',
      function_name: 'list',
      json_path: ''
    }
  ]
]
===================
```

In order to upload the periodic "block to db poll", we can use parts of the _ethqlite\_roundtrip.clj_ script and hard-code the parameters since currently there is no option to separately upload the script and data. Make sure you replace the `node_*`, `service_*` and `api_key` placeholders with your actual values in the file!

```text
; air-scripts/ethqlite_block_committer.clj
(xor
    (seq
        (seq
            (seq
                (seq
                    (seq
                        (call relay ("op" "identity") [])
                        (call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
                    )
                    (seq
                        (call relay ("op" "identity") [])
                        (call %init_peer_id% (returnService "run") [hex_block_result])
                    )
                )
                (seq
                    (seq
                        (call relay ("op" "identity") [])
                        (call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
                    )
                    (seq
                        (call relay ("op" "identity") [])
                        (call %init_peer_id% (returnService "run") [int_block_result])
                    )
                )
            )
            (seq
                (seq
                    (call relay ("op" "identity") [])
                    (call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
                )
                (seq
                    (call relay ("op" "identity") [])
                    (call %init_peer_id% (returnService "run") [block_result])
                )
            )
        )
        (seq
            (seq
                (call relay ("op" "identity") [])
                (call sqlite_node (sqlite_service "update_reward_blocks") [block_result] insert_result)
            )
            (seq
                (call relay ("op" "identity") [])
                (call %init_peer_id% (returnService "run") [insert_result])
            )
        )
    )
    (seq
        (call relay ("op" "identity") [])
        (call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
    )
)
```

```bash
# script file to string variable
AIR=`cat air-scripts/ethqlite_block_committer.clj`
# interval variable in seconds to string variable
INT="10"
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/add_stored_service.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "interval":"'"$INT"'", "script":"'"$AIR"'"}'
client seed: Cwhf8VuyqPCUPi8keyZAcRVBkaGNLWviHMRwDL2hG8D4
client peerId: 12D3KooWJgFCCeHpcEoVyxT5Fmg47ok43MPU7hfT9cNv5R3KeDEw
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
Particle id: dd3ad854-b10d-4664-846d-42c59c59335f. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  "a1791c0f-084e-4b4d-a85c-a3eb65a18d57" # <= Take note of the storage id !
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'script',
      function_name: 'add',
      json_path: ''
    }
  ]
]
===================
```

Checking once more for listed services hits pay dirt:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/list_stored_services.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}'
client seed: HpHQc1as9zGdiHaMQzyPDaPWrdMVEvAA8DwdJiAvczWS
client peerId: 12D3KooWFiiS7FMo18EbrtWZi38Nwe1SiYCRqcsJNEPtYh28zHNm
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
Particle id: 5fb0af87-310f-4b12-8c73-e044cfd8ef6e. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  [
    {
      "failures": 0,
      "id": "a1791c0f-084e-4b4d-a85c-a3eb65a18d57",
      "interval": "10s",
      "owner": "12D3KooWJgFCCeHpcEoVyxT5Fmg47ok43MPU7hfT9cNv5R3KeDEw",
      "src": "$AIR"
    }
  ]
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'script',
      function_name: 'list',
      json_path: ''
    }
  ]
]
===================
```

And we are golden. Give it some time and start checking Ethqlite for latest block and reward info!!

{% hint style="info" %}
Unfortunately, our daemonized service won't work just yet as the current implementation cannot take the (client) seed we need in order to get our SQLite write working. It's on the to-do list but if you need it, please contact us and we'll see about juggling priorities.
{% endhint %}

For completeness' sake, let's remove the stored service with the following AIR script:

```text
; remove a stored script
(call node ("script" "remove") [script_id] result)
```
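An invocation might look like the following, mirroring the earlier `fldist` calls; the script file name is hypothetical and the ids need to be replaced with your node id and the storage id returned by "add":

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/remove_stored_service.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "script_id":"a1791c0f-084e-4b4d-a85c-a3eb65a18d57"}'
```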
## Advanced Service Output Access

As Aquamarine advances a particle's journey through the network, the output from a service method at workflow sequence s-1 tends to be the input for a service method at sequence s. For example, the _hex\_to\_int_ method, as used earlier, takes the output from the _get\_latest\_block_ method. With single parameter outputs, this is a pretty straightforward and inherently decoupled dependency relation. However, when result parameters become more complex, such as structs, we still would like to keep services as decoupled as possible.

Fluence provides this capability by facilitating the conversion of (Rust) struct returns into [json values](https://github.com/fluencelabs/aquamarine/blob/master/interpreter-lib/src/execution/boxed_value/jvaluable.rs#L30). This allows json type key-value access to a desired subset of return values. If you go back to the _ethqlite\_roundtrip.clj_ script, you may notice some fancy `$` and `!` operators tucked away in the deep recesses of parenthesis stacking. Below is the pertinent snippet:

```text
; ethqlite_roundtrip.clj
; <snip>
(seq
    (seq
        (seq
            (call relay ("op" "identity") [])
            (call sqlite_node (sqlite_service "get_reward_block") [int_block_result] select_result_2)
        )
        (seq
            (call relay ("op" "identity") [])
            (call %init_peer_id% (returnService "run") [select_result_2])
        )
    )
    (seq
        (seq
            (call relay ("op" "identity") [])  ; coming up next line !!
            (call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!] select_result_3) ; <= Here it is !!
        )
        (seq
            (call relay ("op" "identity") [])
            (call %init_peer_id% (returnService "run") [select_result_3])
        )
    )
)
; <snip>
```

Before we dive in, let's review the output from the _get\_reward\_block_ method, which is part of the ethqlite service:

```rust
// https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L139
// https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L89
#[fce]
#[derive(Debug)]
pub struct RewardBlock {
    pub block_number: i64,
    pub timestamp: i64,
    pub block_miner: String,
    pub block_reward: String,
}
```

and the input expectations of _get\_miner\_rewards_, also an ethqlite service method, with the following [function](https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L177) signature: `pub fn get_miner_rewards(miner_address: String) -> MinerRewards`.

Basically, _get\_miner\_rewards_ wants an Ethereum address as a `String` and in the context of our AIR script we want to get the value from the _get\_reward\_block_ result. Rather than tightly coupling _get\_miner\_rewards_ to _get\_reward\_block_ in terms of, say, the _RewardBlock_ input parameter, we take advantage of the Fluence capability to turn structs into json strings and then supply the relevant key to extract the desired value. Specifically, we use the `$` operator to access the json representation at the desired index and the `!` operator to flatten the value, if desired.

For example,

```text
(call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!] select_result_3)
```

uses the _block\_miner_ key to retrieve the miner address for subsequent consumption. In order to take full advantage of this important feature, developers should return more complex results as FCE structs to prevent tight service coupling. This approach allows for a significant reduction of service dependencies, re-writes and re-deployments due to even minor changes in upstream dependencies.
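The same pattern extends to any other field of the returned struct. For instance, a consumer interested only in the reward amount could be wired up like this; the consuming node, service and function names in this snippet are made up for illustration:

```text
(call some_node (some_service "process_reward") [select_result_2.$.["block_reward"]!] reward_result)
```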
# Ethereum Request Service

The source code for this section can be found [here](https://github.com/fluencelabs/examples/tree/main/multi-service) and is pretty straightforward, with the two Etherscan api endpoints wrapped as public, FCE-marked functions:

```rust
use crate::curl_request;
use fluence::fce;
use fluence::MountedBinaryResult;

fn result_to_string(result: MountedBinaryResult) -> String {
    if result.is_success() {
        return String::from_utf8(result.stdout).expect("Found invalid UTF-8");
    }
    String::from_utf8(result.stderr).expect("Found invalid UTF-8")
}

#[fce]
pub fn get_latest_block(api_key: String) -> String {
    let url = f!("https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey={api_key}");
    let header = "-d \"\"";

    let curl_cmd: Vec<String> = vec![header.into(), url.into()];
    let response = unsafe { curl_request(curl_cmd) };
    let res = result_to_string(response);
    let obj = serde_json::from_str::<serde_json::Value>(&res).unwrap();
    serde_json::from_value(obj["result"].clone()).unwrap()
}

#[fce]
pub fn get_block(api_key: String, block_number: u32) -> String {
    let url = f!("https://api.etherscan.io/api?module=block&action=getblockreward&blockno={block_number}&apikey={api_key}");
    let header = "-d \"\"";

    let curl_cmd: Vec<String> = vec![header.into(), url];
    let response = unsafe { curl_request(curl_cmd) };
    result_to_string(response)
}
```

Of course, both functions need to be able to make https calls, which is accomplished by calling the (unsafe) `curl_request`:

```rust
// main.rs
#[macro_use]
extern crate fstrings;

use fluence::{fce, WasmLoggerBuilder};
use fluence::MountedBinaryResult as Result;

mod eth_block_getters;

fn main() {
    WasmLoggerBuilder::new().build().ok();
}

#[fce]
#[link(wasm_import_module = "curl_adapter")]
extern "C" {
    pub fn curl_request(curl_cmd: Vec<String>) -> Result;
}
```

Since we are dealing with Wasm modules, we don't have access to sockets at the module level but may be permissioned to call cUrl at the node level. In order to do that, we need to provide an adapter module. The code from the [cUrl adapter](https://github.com/fluencelabs/examples/tree/main/multi-service/curl_adapter) project illustrates how we mount the binary and expose the fce-marked interface for consumption, as shown above.

```rust
// main.rs
#![allow(improper_ctypes)]

use fluence::fce;
use fluence::MountedBinaryResult as Result;

fn main() {}

#[fce]
pub fn curl_request(curl_cmd: Vec<String>) -> Result {
    let response = unsafe { curl(curl_cmd.clone()) };
    log::info!("curl response for {:?} : {:?}", curl_cmd, response);
    response
}

// mounted_binaries are available to import like this:
#[fce]
#[link(wasm_import_module = "host")]
extern "C" {
    pub fn curl(cmd: Vec<String>) -> Result;
}
```

From both modules, we can now create a service configuration which specifies the name for each module and the permission specification for the mounted binaries:

```text
// Block-Getter-Config.toml
modules_dir = "artifacts/"

[[module]]
name = "curl_adapter"

[module.mounted_binaries]
curl = "/usr/bin/curl"


[[module]]
name = "block_getter"
```

If you haven't done so already, run `./scripts/build.sh` to compile the projects. Once we have _wasm_ files and the service configuration, we can check out our accomplishments with the REPL:

```bash
fce-repl Block-Getter-Config.toml
```

which gets us into the REPL, where we can call the _interface_ command:

```bash
Welcome to the FCE REPL (version 0.5.2)
app service was created with service id = 15b9c3ee-ffbc-4464-bb7f-675a41acf81a
elapsed time 111.573048ms

1> interface
Loaded modules interface:
Result {
  ret_code: S32
  error: String
  stdout: Array<U8>
  stderr: Array<U8>
}

curl_adapter:
  fn curl_request(curl_cmd: Array<String>) -> Result

block_getter:
  fn get_block(api_key: String, block_number: U32) -> String
  fn get_latest_block(api_key: String) -> String

2>
```

Checking the available interfaces shows the **public** interfaces to our respective Wasm modules, which are ready for calling:

```bash
> call curl_adapter curl_request [["-sS", "https://google.com"]]
result: Object({"error": String(""), "ret_code": Number(0), "stderr": Array([]), "stdout": Array([Number(60), Number(72), Number(84), Number(77), Number(76), Number(62), Number(60), Number(72), Number(69), Number(65), N
<snip>
, Number(72), Number(84), Number(77), Number(76), Number(62), Number(13), Number(10)])})
elapsed time: 328.965523ms
```

As implemented, the raw cUrl call returns a [MountedBinaryResult](https://github.com/fluencelabs/rust-sdk/blob/c2fec5939fc17dcc227a78c7c8030549a6ff193f/crates/main/src/mounted_binary.rs) and we can see the corresponding _struct_ at the top of our `fce-repl` interfaces output. Looking through the return object, we see the standard pipe approach in place and find our query result in the stdout pipe. Of course, we are mostly interested in using cUrl from other modules as part of our service, such as getting the most recently produced block and its corresponding data:

```bash
3> call block_getter get_latest_block ["MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"]
result: String("0xb7eeb3")
elapsed time: 559.991486ms
```

and with some cognitive gymnastics we convert 0xb7eeb3 to 12054195:

```bash
4> call block_getter get_block ["MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH", 12054195]
result: String("{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12054195\",\"timeStamp\":\"1615957734\",\"blockMiner\":\"0x99c85bb64564d9ef9a99621301f22c9993cb89e3\",\"blockReward\":\"2000000000000000000\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}")
elapsed time: 578.485579ms
```

All good, but please note that your latest block data is going to be significantly different from what's used here. Regardless, manual conversions are really not all that productive, which is why we implemented a [hex\_converter](https://github.com/fluencelabs/examples/tree/main/multi-service/hex_converter) module. Let's update our service config to:

```text
// Block-Getter-With-Converter-Config.toml
modules_dir = "artifacts/"

[[module]]
name = "curl_adapter"

[module.mounted_binaries]
curl = "/usr/bin/curl"


[[module]]
name = "block_getter"

[[module]]
name = "hex_converter"
```

and running `fce-repl` with _Block-Getter-With-Converter-Config.toml_ lists the interface for the _hex\_converter_ module. So far, so good. Using the previously generated hex string yields the expected conversion result:

```bash
Welcome to the FCE REPL (version 0.5.2)
app service was created with service id = 09bfcff0-67dd-44c2-a677-de5a7a0c6383
elapsed time 176.472631ms

1> interface
Loaded modules interface:
Result {
  ret_code: S32
  error: String
  stdout: Array<U8>
  stderr: Array<U8>
}

hex_converter:
  fn hex_to_int(data: String) -> U64

block_getter:
  fn get_latest_block(api_key: String) -> String
  fn get_block(api_key: String, block_number: U32) -> String

curl_adapter:
  fn curl_request(curl_cmd: Array<String>) -> Result

2> call hex_converter hex_to_int ["0xb7eeb3"]
result: Number(12054195)
elapsed time: 120.34µs

3>
```
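For reference, a minimal sketch of what such a hex-to-int conversion could look like as an fce-marked function; this is an assumed implementation for illustration, while the actual module lives in the linked hex\_converter repo:

```rust
use fluence::fce;

// Convert a hex string such as "0xb7eeb3" into its integer value.
#[fce]
pub fn hex_to_int(data: String) -> u64 {
    let hex = data.trim_start_matches("0x");
    u64::from_str_radix(hex, 16).unwrap_or(0)
}
```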
Before we review the SQLite code, let's deploy our two services to the local node with the `fldist` tool. Make sure you use the node id and address for **your** local Fluence node:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms artifacts/curl_adapter.wasm:config/curl_cfg.json artifacts/block_getter.wasm:config/block_getter_cfg.json --name EthGetters
client seed: 4mp3sXX5FR9heeuqFtfRkq5GRqNJFQ8TvGCZ94PoSvQr
client peerId: 12D3KooWBdvur9HwahxMaGN2yrYDiofVD4GDBHivLtJwxwBuyzcr
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
uploading blueprint EthGetters to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWBdvur9HwahxMaGN2yrYDiofVD4GDBHivLtJwxwBuyzcr
service id: ca0eceb3-871f-440e-aff1-0a186321437d
service created successfully
```

and for the hex conversion service:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms artifacts/hex_converter.wasm:config/hex_converter_cfg.json --name HexConverter
client seed: BGvUGBvYifJf8oHS6rA7UmBc7Cs8EeaJxie8eFyP7YmY
client peerId: 12D3KooWJLXYiXwmmWPEv7kdQ8nYb646L96XyyTgkrrMAXen3FQy
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
uploading blueprint HexConverter to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWJLXYiXwmmWPEv7kdQ8nYb646L96XyyTgkrrMAXen3FQy
service id: 36043704-4d40-4c74-a1bd-3abbde28305d
service created successfully
```

Our first service, _EthGetters_, is comprised of two modules and the second service, _HexConverter_, of one module. With those two services available, we have everything we need to get the block reward information for the most recently produced block. In order to get us there, we write a small AIR script to coordinate the services into an app:

```text
; latest_block_reward.clj
(xor
    (seq
        (seq
            (seq
                (seq
                    (call relay ("op" "identity") [])
                    (call node_1 (service_1 "get_latest_block") [api_key] hex_result)
                )
                (seq
                    (call relay ("op" "identity") [])
                    (call %init_peer_id% (returnService "run") [hex_result])
                )
            )
            (seq
                (seq
                    (call relay ("op" "identity") [])
                    (call node_2 (service_2 "hex_to_int") [hex_result] int_result)
                )
                (seq
                    (call relay ("op" "identity") [])
                    (call %init_peer_id% (returnService "run") [int_result])
                )
            )
        )
        (seq
            (seq
                (call relay ("op" "identity") [])
                (call node_1 (service_1 "get_block") [api_key int_result] block_result)
            )
            (seq
                (call relay ("op" "identity") [])
                (call %init_peer_id% (returnService "run") [block_result])
            )
        )
    )
    (seq
        (call relay ("op" "identity") [])
        (call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
    )
)
```

As always, we use the `fldist` _run\_air_ command:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/latest_reward_block.clj -d '{"service_1": "ca0eceb3-871f-440e-aff1-0a186321437d", "service_2": "36043704-4d40-4c74-a1bd-3abbde28305d", "node_1":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "api_key":"your-api-key"}'
client seed: 9xfs3P1r5QmBxCohcA4xmpE448Q64c14jmYn4XNJZEiz
client peerId: 12D3KooWNfA3Za3bvfHutWhvtZxC5NWdbaujoFZkR8bh2WVTZzw3
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
Particle id: 930ea13f-1474-4501-862a-ca5fad22ee42. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  "0xb7fe13"
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
      function_name: 'get_latest_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  12058131
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '36043704-4d40-4c74-a1bd-3abbde28305d',
      function_name: 'hex_to_int',
      json_path: ''
    }
  ]
]
===================
===================
[
  "{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12058131\",\"timeStamp\":\"1616010177\",\"blockMiner\":\"0x829bd824b016326a401d083b33d092293333a830\",\"blockReward\":\"6159144598411626490\",\"uncles\":[{\"miner\":\"0xe72f79190bc8f92067c6a62008656c6a9077f6aa\",\"unclePosition\":\"0\",\"blockreward\":\"500000000000000000\"}],\"uncleInclusionReward\":\"62500000000000000\"}}"
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
      function_name: 'get_block',
      json_path: ''
    }
  ]
]
===================
```

Right on! Our two services coordinate into the intended application returning the reward data for the latest block. Before we move on, locate the corresponding services on the Fluence testnet via the [dashboard](https://dash.fluence.dev/), update your command-line with the appropriate service and node ids and run the same AIR script. Congratulations, you just ran an app coordinated by distributed services!
# A Little More AIR, Please

Before you go off becoming a prominent Fluence p2p application developer gazillionaire, there are a couple more AIR functions you should be aware of: par and fold.

## Distributed Workflow Parallelization

By now you may have come to realize that building distributed "applications" is a tad different than

## Distributed List Processing with fold
# Blocks To Database

It's been a long time coming but finally, we are ready to save data in SQLite by simply coordinating the various services we already deployed into one big-ass AIR script:

```text
; ethqlite_roundtrip.clj
(xor
    (seq
        (seq
            (seq
                (seq
                    (seq
                        (seq
                            (seq
                                (call relay ("op" "identity") [])
                                (call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
                            )
                            (seq
                                (call relay ("op" "identity") [])
                                (call %init_peer_id% (returnService "run") [hex_block_result])
                            )
                        )
                        (seq
                            (seq
                                (call relay ("op" "identity") [])
                                (call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
                            )
                            (seq
                                (call relay ("op" "identity") [])
                                (call %init_peer_id% (returnService "run") [int_block_result])
                            )
                        )
                    )
                    (seq
                        (seq
                            (call relay ("op" "identity") [])
                            (call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
                        )
                        (seq
                            (call relay ("op" "identity") [])
                            (call %init_peer_id% (returnService "run") [block_result])
                        )
                    )
                )
                (seq
                    (seq
                        (call relay ("op" "identity") [])
                        (call sqlite_node (sqlite_service "update_reward_blocks") [block_result] insert_result)
                    )
                    (seq
                        (call relay ("op" "identity") [])
                        (call %init_peer_id% (returnService "run") [insert_result])
                    )
                )
            )
            (seq
                (seq
                    (call relay ("op" "identity") [])
                    (call sqlite_node (sqlite_service "get_latest_reward_block") [] select_result)
                )
                (seq
                    (call relay ("op" "identity") [])
                    (call %init_peer_id% (returnService "run") [select_result])
                )
            )
        )
        (seq
            (seq
                (seq
                    (call relay ("op" "identity") [])
                    (call sqlite_node (sqlite_service "get_reward_block") [int_block_result] select_result_2)
                )
                (seq
                    (call relay ("op" "identity") [])
                    (call %init_peer_id% (returnService "run") [select_result_2])
                )
            )
            (seq
                (seq
                    (call relay ("op" "identity") [])
                    (call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!] select_result_3)
                )
                (seq
                    (call relay ("op" "identity") [])
                    (call %init_peer_id% (returnService "run") [select_result_3])
                )
            )
        )
    )
    (seq
        (call relay ("op" "identity") [])
        (call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
    )
)
```

The script extends our previous incarnation by adding only one more write method, `update_reward_blocks`, and a few testing calls, i.e., queries against the table. We need to gather our node and service ids (which are different for you!) to update our json data argument for the `fldist` call:

```bash
-d '{"service_1":"ca0eceb3-871f-440e-aff1-0a186321437d", \
     "node_1":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
     "service_2":"36043704-4d40-4c74-a1bd-3abbde28305d", \
     "node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
     "sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", \
     "sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
     "api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}'
```

and run the AIR script with the revised `fldist` command:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/ethqlite_roundtrip.clj -d '{"service_1":"ca0eceb3-871f-440e-aff1-0a186321437d", "node_1":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17","service_2":"36043704-4d40-4c74-a1bd-3abbde28305d", "node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", "sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}' -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
Particle id: 5ce2dcf0-2d4d-40ec-8cef-d5a0cea4f0e7. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  "0xb807a1"
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
      function_name: 'get_latest_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  12060577
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '36043704-4d40-4c74-a1bd-3abbde28305d',
      function_name: 'hex_to_int',
      json_path: ''
    }
  ]
]
===================
===================
[
  "{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12060577\",\"timeStamp\":\"1616042932\",\"blockMiner\":\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\",\"blockReward\":\"3622523288217263710\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}"
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
      function_name: 'get_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "err_str": "",
    "success": 1
  }
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
      function_name: 'update_reward_blocks',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "block_miner": "\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\"",
    "block_number": 12060577,
    "block_reward": "3622523288217263710",
    "timestamp": 1616042932
  }
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
      function_name: 'get_latest_reward_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "block_miner": "\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\"",
    "block_number": 12060577,
    "block_reward": "3622523288217263710",
    "timestamp": 1616042932
  }
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
      function_name: 'get_reward_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "miner_address": "\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\"",
    "rewards": [
      "3622523288217263710"
    ]
  }
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
      function_name: 'get_miner_rewards',
      json_path: ''
    }
  ]
]
===================
```

And that's a wrap!

In summary, we have developed and deployed multiple Fluence services to store Ethereum reward block data in an SQLite-as-a-service database and used Aquamarine to coordinate those services into applications. See Figure 2 below.

Figure 2: Aquamarine Application Creation From Modules And Services

![](../../.gitbook/assets/image.png)

Working through this project hopefully made it quite clear that the combination of distributed network services and Aquamarine makes for the easy and expedient creation of powerful applications by composition and coordination. Moreover, it showcases the power of reusability and hints at the (economic) rent available to developers. Presumably not entirely unexpectedly, there is a bit more to discover and a little more power to be unleashed. In the next section we touch upon two additional concepts to extend our capabilities: how to incorporate peer-based script execution into our workflow and how to utilize advanced, in-flow (or in-transit) results processing.
@ -1,397 +0,0 @@
|
||||
# SQLite Service
|
||||
|
||||
All our work so far has been about gathering block reward information for the latest block:
|
||||
|
||||
```javascript
|
||||
// Block reward info on Wednesday, March 17, at 2021 7:42:57 PM GMT
|
||||
// for block 12058131:
|
||||
"{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12058131\",
|
||||
\"timeStamp\":\"1616010177\",\"blockMiner\":\"0x829bd824b016326a401d083b33d092293333a830\",
|
||||
\"blockReward\":\"6159144598411626490\",\"uncles\":[
|
||||
{\"miner\":\"0xe72f79190bc8f92067c6a62008656c6a9077f6aa\",\"unclePosition\":\"0\",
|
||||
\"blockreward\":\"500000000000000000\"}],
|
||||
\"uncleInclusionReward\":\"62500000000000000\"}}"
|
||||
```
|
||||
|
||||
which [happens about every 13 seconds or so on mainnet](https://etherscan.io/chart/blocktime) and every four seconds on Kovan. Rather than stashing the block reward results in a frontend-based storage solution, we deploy an SQLite service as our peer-to-peer hosted _Ethqlite_ service. Please see the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) for the code.
|
||||
|
||||
To get SQLite as a service, we build our service from two modules: the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) and the [Fluence sqlite](https://github.com/fluencelabs/sqlite) Wasm module, which we can build ourselves or pick up as a Wasm file from the [releases](https://github.com/fluencelabs/sqlite/releases). This largely, but not entirely, mirrors what we did with the cUrl service: build the service by providing an adapter to the binary. Unlike the cUrl binary, however, we are bringing our own sqlite binary, i.e., _sqlite3.wasm_, with us.
|
||||
|
||||
This leaves us to code our _ethqlite_ module with respect to the desired CRUD interfaces and security. As [previously](../../quick_start/quick_start_add_persistence/quick_start_persistence_setup.md) discussed, we want writes to the sqlite service to be privileged, which implies that we need to own the service and have the client seed to manage authentication and ambient authorization. Specifically, we can implement a rudimentary authorization system where authentication implies authorization \(to write\). The `is_owner` function in the _ethqlite_ repo does exactly that: if the caller can prove ownership by providing a valid client seed, then we have a true condition equating write-privileged ownership with the caller identity:
|
||||
|
||||
```rust
|
||||
// auth.rs
|
||||
use fluence::{fce, CallParameters};
|
||||
use::fluence;
|
||||
use crate::get_connection;
|
||||
|
||||
pub fn is_owner() -> bool {
|
||||
let meta = fluence::get_call_parameters();
|
||||
let caller = meta.init_peer_id;
|
||||
let owner = meta.service_creator_peer_id;
|
||||
|
||||
caller == owner
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub fn am_i_owner() -> bool {
|
||||
is_owner()
|
||||
}
|
||||
```
|
||||
|
||||
where `fluence::get_call_parameters` is an FCE function that returns the populated _CallParameters_ struct defined in the [Fluence Rust SDK](https://github.com/fluencelabs/rust-sdk/blob/71591f412cb65879d74e8c38838e827ab74d8802/crates/main/src/call_parameters.rs) and provides us with the salient creator and caller parameters at runtime.
|
||||
|
||||
While the majority of the CRUD operations in _crud.rs_ are standard fare, the authentication and authorization check appears in `update_reward_blocks`:
|
||||
|
||||
```rust
|
||||
// crud.rs
|
||||
#[fce]
|
||||
pub fn update_reward_blocks(data_string: String) -> UpdateResult {
|
||||
if !is_owner() { // <= auth & auth check !!
|
||||
return UpdateResult { success:false, err_str: "You are not the owner".into()};
|
||||
}
|
||||
|
||||
let obj:serde_json::Value = serde_json::from_str(&data_string).unwrap();
|
||||
let obj = obj["result"].clone();
|
||||
|
||||
if obj["blockNumber"] == serde_json::Value::Null {
|
||||
return UpdateResult { success:false, err_str: "Empty reward block string".into()};
|
||||
}
|
||||
|
||||
let conn = get_connection();
|
||||
|
||||
let insert = "insert or ignore into reward_blocks values(?, ?, ?, ?)";
|
||||
let mut ins_cur = conn.prepare(insert).unwrap().cursor();
|
||||
<snip>
|
||||
```
|
||||
|
||||
That is, any non-permissioned call is prevented from write operations and an error message is returned. Please note that in [main.rs](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/src/main.rs) we have a few admin convenience functions that are also protected by the `is_owner` guard.
|
||||
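For illustration only, such a guarded admin helper could look roughly like the sketch below; the body and SQL are assumptions made for this write-up, and only the `is_owner` gate mirrors the repo code:

```rust
// hypothetical admin helper in the spirit of main.rs -- not the repo's actual code
#[fce]
pub fn owner_nuclear_reset() -> bool {
    // same auth & auth guard as update_reward_blocks
    if !is_owner() {
        return false;
    }

    let conn = get_connection();
    // wipe the assumed reward_blocks table; the real implementation may differ
    conn.execute("delete from reward_blocks").is_ok()
}
```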
|
||||
## Building and Deploying Ethqlite
|
||||
|
||||
Our _build.sh_ script should look quite familiar, with the possible exception of downloading the already built _sqlite3.wasm_ file:
|
||||
|
||||
```bash
|
||||
#!/bin/sh
# build.sh

fce build --release

mkdir -p artifacts
rm artifacts/*
cp target/wasm32-wasi/release/ethqlite.wasm artifacts/
wget https://github.com/fluencelabs/sqlite/releases/download/v0.10.0_w/sqlite3.wasm
mv sqlite3.wasm artifacts/
|
||||
```
|
||||
|
||||
Run `./build.sh` and check the artifacts directory for the expected Wasm files.
|
||||
|
||||
Like all Fluence services, Ethqlite needs a [service configuration](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/Config.toml) file, which looks a little more involved than what we have seen so far.
|
||||
|
||||
```text
|
||||
modules_dir = "artifacts/"
|
||||
|
||||
[[module]]
|
||||
name = "sqlite3"
|
||||
mem_pages_count = 100
|
||||
logger_enabled = false
|
||||
|
||||
[module.wasi]
|
||||
preopened_files = ["/tmp"]
|
||||
mapped_dirs = { "tmp" = "/tmp" }
|
||||
|
||||
|
||||
|
||||
[[module]]
|
||||
name = "ethqlite"
|
||||
mem_pages_count = 1
|
||||
logger_enabled = false
|
||||
|
||||
[module.wasi]
|
||||
preopened_files = ["/tmp"]
|
||||
mapped_dirs = { "tmp" = "/tmp" }
|
||||
```
|
||||
|
||||
Let's break it down:
|
||||
|
||||
* the first \[\[module\]\] section
  * specifies the _sqlite3.wasm_ module we pulled from the repo,
  * allocates memory, where each page is about 64KB, and
  * sets permissions and maps node file access
* the second section is for our business logic \(CRUD\) adapter module where, again, we allocate the memory, set permissions and map file access.
|
||||
|
||||
We can now fire up `fce-repl`:
|
||||
|
||||
```bash
|
||||
fce-repl Config.toml
|
||||
Welcome to the FCE REPL (version 0.5.2)
|
||||
app service was created with service id = 9b923db7-3747-41ab-b1fd-66bd0ccd9f68
|
||||
elapsed time 916.210305ms
|
||||
|
||||
1> interface
|
||||
Loaded modules interface:
|
||||
UpdateResult {
|
||||
success: I32
|
||||
err_str: String
|
||||
}
|
||||
RewardBlock {
|
||||
block_number: S64
|
||||
timestamp: S64
|
||||
block_miner: String
|
||||
block_reward: String
|
||||
}
|
||||
InitResult {
|
||||
success: I32
|
||||
err_msg: String
|
||||
}
|
||||
MinerRewards {
|
||||
miner_address: String
|
||||
rewards: Array<String>
|
||||
}
|
||||
DBOpenDescriptor {
|
||||
ret_code: S32
|
||||
db_handle: U32
|
||||
}
|
||||
DBPrepareDescriptor {
|
||||
ret_code: S32
|
||||
stmt_handle: U32
|
||||
tail: U32
|
||||
}
|
||||
DBExecDescriptor {
|
||||
ret_code: S32
|
||||
err_msg: String
|
||||
}
|
||||
|
||||
ethqlite:
|
||||
fn init_service() -> InitResult
|
||||
fn get_miner_rewards(miner_address: String) -> MinerRewards
|
||||
fn owner_nuclear_reset() -> I32
|
||||
fn get_reward_block(block_number: U32) -> RewardBlock
|
||||
fn update_reward_blocks(data_string: String) -> UpdateResult
|
||||
fn get_latest_reward_block() -> RewardBlock
|
||||
fn am_i_owner() -> I32
|
||||
|
||||
sqlite3:
|
||||
fn sqlite3_reset(stmt_handle: U32) -> S32
|
||||
<snip>
|
||||
fn sqlite3_column_blob(stmt_handle: U32, icol: S32) -> Array<U8>
|
||||
```
|
||||
|
||||
and see all the public Fluence interfaces including the ones from the _sqlite3.wasm_ module. Let's upload the service to the local network:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms ethqlite/artifacts/sqlite3.wasm:ethqlite/sqlite3_cfg.json ethqlite/artifacts/ethqlite.wasm:ethqlite/ethqlite_cfg.json --name EthQlite
|
||||
client seed: 7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv
|
||||
client peerId: 12D3KooWCzWm4xBv7nApuK8vNLSbKKYV36kvkz3ywqj5xcjscnz9
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint EthQlite to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWCzWm4xBv7nApuK8vNLSbKKYV36kvkz3ywqj5xcjscnz9
|
||||
service id: fb9ba691-c0fc-4500-88cc-b74f3b281088
|
||||
service created successfully
|
||||
```
|
||||
|
||||
Now that we created the service on our local node, let's make sure that we have the necessary owner privileges. First, we create a little AIR script that calls the _am\_i\_owner_ function from the ethqlite service:
|
||||
|
||||
```text
|
||||
; am_i_owner.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service "am_i_owner") [] result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
and run it with the `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/am_i_owner.clj -d '{"service":"fb9ba691-c0fc-4500-88cc-b74f3b281088", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}'
|
||||
client seed: 3J8BqpGTQ1Ujbr8dvnpTxfr5EUneHf9ZwW84ru9sNmj7
|
||||
client peerId: 12D3KooW9z5hBDY6cXnkEGraiPFn6hJ3VstqAkVaAM7oThTiWVjL
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: efa37779-e3aa-4353-b63d-12b444b6366b. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
0
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'fb9ba691-c0fc-4500-88cc-b74f3b281088',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
As discussed earlier, the service needs some proof that we have owner privileges, which we can provide by adding the client seed, `-s`, to our call parameters:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/am_i_owner.clj -d '{"service":"fb9ba691-c0fc-4500-88cc-b74f3b281088", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}' -s 7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv
|
||||
client seed: 7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv
|
||||
client peerId: 12D3KooWCzWm4xBv7nApuK8vNLSbKKYV36kvkz3ywqj5xcjscnz9
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: f0371615-7d75-4971-84a9-3111b8263de7. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
1
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'fb9ba691-c0fc-4500-88cc-b74f3b281088',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
and all is well. So where does that client seed _7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv_ come from? The easy answer is that we copied it from the service creation return values -- line 2 above. But that doesn't really answer the question. The more involved answer is that every developer should have one or more cryptographic key pairs from which the client seed is derived. Moreover, when creating a new service, the client seed should be specified; if it is not, the system creates one for us, as seen above.
|
||||
|
||||
The easiest way to get a keypair and seed is from the `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist create_keypair
|
||||
client seed: 8LKYUmsWkMSiHBxo8deXyNJD3wXutq265TSTcmmtgQTJ
|
||||
client peerId: 12D3KooWRtrFyYjis4qQpC4kHcJWbtpM4mZgLYBoDn93eXJEGtVH
|
||||
relay peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
{
|
||||
id: '12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww',
|
||||
privKey: 'CAESQO/TcX2DkTukK6XxJUc/2U6gqOLVza5PRWM2FhXfJ1qilKtA6qsHx0Rdibwxsg4Vh7JjTfRfMXSlLJphGCOb7zI=',
|
||||
pubKey: 'CAESIJSrQOqrB8dEXYm8MbIOFYeyY030XzF0pSyaYRgjm+8y',
|
||||
seed: 'H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V'
|
||||
}
|
||||
```
|
||||
|
||||
So let's re-deploy the Ethqlite service and specify the client seed at creation time:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms ethqlite/artifacts/sqlite3.wasm:ethqlite/sqlite3_cfg.json ethqlite/artifacts/ethqlite.wasm:ethqlite/ethqlite_cfg.json --name EthQliteSecure -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint EthQliteSecure to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
service id: 470fcaba-6834-4ccf-ac0c-4f6494e9e77b
|
||||
service created successfully
|
||||
```
|
||||
|
||||
Updating the call parameters to reflect the new service id and client seed confirms our ownership over the service:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/am_i_owner.clj -d '{"service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}' -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 6d8c158b-d998-44ca-9d4c-255ce4b9cd21. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
1
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
Back to our task at hand: persisting reward block data to our sqlite as a service. Looking over the source code, we know that in order to accomplish persistence, we need to:
|
||||
|
||||
* init the database: `pub fn init_service() -> InitResult` \(see the sketch below\)
|
||||
* provide reward data : `pub fn update_reward_blocks(data_string: String) -> UpdateResult`
|
||||
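Under the hood, `init_service` presumably just creates the reward block table, guarded by the same ownership check. A rough, hedged sketch -- the actual schema, connector API and error strings in the ethqlite repo may differ:

```rust
// illustrative sketch only -- not the repo's actual init code
#[fce]
pub fn init_service() -> InitResult {
    if !is_owner() {
        return InitResult { success: false, err_msg: "You are not the owner".into() };
    }

    let conn = get_connection();

    // columns mirror the RewardBlock record: number, timestamp, miner, reward
    let create = "create table reward_blocks (
                    block_number integer not null primary key,
                    timestamp integer not null,
                    block_miner text not null,
                    block_reward text not null)";

    // creating an already existing table fails, which is one way a repeated
    // init could surface a "Service already initiated" style error
    match conn.execute(create) {
        Ok(_) => InitResult { success: true, err_msg: String::new() },
        Err(e) => InitResult { success: false, err_msg: e.to_string() },
    }
}
```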
|
||||
Initializing Ethqlite is, for the most part, a one-time event, so we'll do it right now, outside of our recurring block discovery and commit workflow, with another small AIR script:
|
||||
|
||||
```text
|
||||
; ethqlite_init.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service "init_service") [] result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
which we execute against the node with the `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/ethqlite_init.clj -d '{"service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}' -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 2fb4a366-6f40-46c1-9329-d77c6d03dfad. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
{
|
||||
"err_msg": "",
|
||||
"success": 1
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'init_service',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
If you run the init script again, you will receive an error _"Service already initiated"_, so we can be reasonably confident our code is working and it looks like our Ethqlite service is up and running on the local node.
|
||||
|
||||
Due to the security concerns for our database, it is not advisable, or even possible, to use an already deployed Sqlite service from the Fluence Dashboard. Instead, we deploy our own instance with our own \(secret\) client seed. To determine which network nodes are available, run:
|
||||
|
||||
```bash
|
||||
fldist --env testnet env
|
||||
client seed: Cj4Wpy5y955o2N3T8Hs5myRoFGhBaBhytCdsYeyFLQPw
|
||||
client peerId: 12D3KooWQg8cyj4z8Bv4rGq1PeXL1XKEQd6Z2CCFguy9D4NnLaKm
|
||||
relay peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
/dns4/net01.fluence.dev/tcp/19001/wss/p2p/12D3KooWEXNUbCXooUwHrHBbrmjsrpHXoEphPwbjQXEGyzbqKnE9
|
||||
/dns4/net01.fluence.dev/tcp/19990/wss/p2p/12D3KooWMhVpgfQxBLkQkJed8VFNvgN4iE6MD7xCybb1ZYWW2Gtz
|
||||
/dns4/net02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHk9BjDQBUqnavciRPhAYFvqKBe4ZiPPvde7vDaqgn5er
|
||||
/dns4/net03.fluence.dev/tcp/19001/wss/p2p/12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
/dns4/net04.fluence.dev/tcp/19001/wss/p2p/12D3KooWJbJFaZ3k5sNd8DjQgg3aERoKtBAnirEvPV8yp76kEXHB
|
||||
/dns4/net05.fluence.dev/tcp/19001/wss/p2p/12D3KooWCKCeqLPSgMnDjyFsJuWqREDtKNHx1JEBiwaMXhCLNTRb
|
||||
/dns4/net06.fluence.dev/tcp/19001/wss/p2p/12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH
|
||||
/dns4/net07.fluence.dev/tcp/19001/wss/p2p/12D3KooWBSdm6TkqnEFrgBuSkpVE3dR1kr6952DsWQRNwJZjFZBv
|
||||
/dns4/net08.fluence.dev/tcp/19001/wss/p2p/12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H
|
||||
/dns4/net09.fluence.dev/tcp/19001/wss/p2p/12D3KooWF7gjXhQ4LaKj6j7ntxsPpGk34psdQicN2KNfBi9bFKXg
|
||||
/dns4/net10.fluence.dev/tcp/19001/wss/p2p/12D3KooWB9P1xmV3c7ZPpBemovbwCiRRTKd3Kq2jsVPQN4ZukDfy
|
||||
```
|
||||
|
||||
which lists the available testnet peers. Pick one, update the node-id parameter, drop the node-addr parameter from your deployment command line, upload the new ethqlite service and initialize it. Congrats, you are now the proud maker of a Fluence testnet Ethqlite service!
|
||||
|
||||
Now it is time to get block data into the database.
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Recap
|
||||
|
@ -1,320 +0,0 @@
|
||||
# From Module To Service
|
||||
|
||||
In Fluence, a service is based on one or more [Wasm](https://webassembly.org/) modules suitable to be deployed to the Fluence Compute Engine \(FCE\). In order to develop our modules, we use Rust and the [Fluence Rust SDK](https://github.com/fluencelabs/rust-sdk).
|
||||
|
||||
## Preliminaries
|
||||
|
||||
The general process to create a Fluence \(module\) project is to:
|
||||
|
||||
```bash
|
||||
cargo +nightly new your_module_name
|
||||
```
|
||||
|
||||
and add the [binary target](https://doc.rust-lang.org/cargo/reference/cargo-targets.html#binaries) and [Fluence Rust SDK](https://crates.io/crates/fce) to the Cargo.toml:
|
||||
|
||||
```text
|
||||
<snip>
|
||||
|
||||
[[bin]] # <- binary target
|
||||
name = "<your_module_name>"
|
||||
path = "src/main.rs"
|
||||
|
||||
[dependencies]
|
||||
fluence = { version = "=0.5.0", features = ["logger"] }
|
||||
log = "0.4.14"
|
||||
```
|
||||
|
||||
## Developing A Simple Wasm Module
|
||||
|
||||
Let's build a simple greeting module to verify our setup and quickly go through the steps we need to complete to build a simple service.
|
||||
|
||||
```bash
|
||||
cargo +nightly new greeting
|
||||
cd greeting
|
||||
```
|
||||
|
||||
and update _main.rs_:
|
||||
|
||||
```rust
|
||||
use fluence::fce; // 1
|
||||
use fluence::module_manifest; // 2
|
||||
|
||||
module_manifest!(); // 3
|
||||
|
||||
pub fn main() {} // 4
|
||||
|
||||
#[fce] // 5
|
||||
pub fn greeting(name: String) -> String {
|
||||
format!("Hi, {}", name)
|
||||
}
|
||||
```
|
||||
|
||||
Let's go line by line:
|
||||
|
||||
1. Import the [fce](https://github.com/fluencelabs/fce/tree/5effdcba7215cd378f138ab77f27016024720c0e) module from the [Fluence crate](https://crates.io/crates/fluence), which allows us to compile our code to the [wasm32-wasi](https://docs.rs/crate/wasi/0.6.0) target
|
||||
2. Import the [module\_manifest](https://github.com/fluencelabs/rust-sdk/blob/master/crates/main/src/module_manifest.rs), which allows us to embed the SDK version in our module
|
||||
3. Initiate the module\_manifest macro
|
||||
4. Initiate the main function, which generally stays empty or is used to instantiate a logger \(see the sketch after this list\)
|
||||
5. Markup the public function we want to expose with the FCE macro which, among other things, checks that only Wasm IT types are used
|
||||
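As an aside on item 4: if we did want logging, `main` is where the logger would be instantiated. A minimal, hedged sketch -- the `WasmLoggerBuilder` call is an assumption based on the SDK's `logger` feature and is not part of the greeting example:

```rust
use fluence::module_manifest;

module_manifest!();

pub fn main() {
    // set up the Wasm logger; assumes the `logger` feature of the fluence crate is enabled
    fluence::WasmLoggerBuilder::new().build().unwrap();
}
```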
|
||||
Once we compile our code, we generate the wasm32-wasi file, which can be found in the `target/wasm32-wasi` path of your directory. The `greeting.wasm` file is what we need for testing and eventual upload to the peer-to-peer network.
|
||||
|
||||
To make things a little easier on us, let's create a build script, _build.sh_:
|
||||
|
||||
```bash
|
||||
#!/bin/sh
|
||||
# This script builds all sub-projects and puts our Wasm module(s) in a high-level dir
|
||||
|
||||
fce build --release                                      # 1

mkdir -p artifacts                                       # 2
rm artifacts/*
cp target/wasm32-wasi/release/greeting.wasm artifacts/   # 3
|
||||
```
|
||||
|
||||
Our script executes the following steps in one handy executable:
|
||||
|
||||
1. Compile the FCE annotated Rust code to the wasm32-wasi target generating the wasm module we so very much desire
|
||||
2. Make a higher-level artifacts directory to hold wasm file\(s\) in a more convenient location
|
||||
3. Copy the wasm build to the artifacts directory
|
||||
|
||||
Before we can run the script we need to `chmod +x build.sh` to make the file executable. Now we can run it:
|
||||
|
||||
```bash
|
||||
./build.sh
|
||||
```
|
||||
|
||||
which starts the build and compilation of the project and eventually, you should see the `greeting.wasm` file in the artifacts directory.
|
||||
|
||||
```bash
|
||||
ll artifacts
|
||||
-rwxr-xr-x 1 bebo staff 81K Mar 15 19:41 greeting.wasm
|
||||
```
|
||||
|
||||
Before we can actually create a service from our module, one more file needs to be added to our project called the service configuration file. Service config files control the order in which the modules are instantiated, their permissions, maximum memory limits and some other parameters. In general, a service configuration file contains:
|
||||
|
||||
* modules\_dir -- the path to the directory with all the Wasm modules
|
||||
* \[\[module\]\] -- a list of modules comprising the service
|
||||
* name -- the \(file\) name of the corresponding Wasm file in the modules\_dir
|
||||
|
||||
For our greeting service, we add the following _Config.toml_ file:
|
||||
|
||||
```text
|
||||
# Config.toml
|
||||
modules_dir = "artifacts/"
|
||||
|
||||
[[module]]
|
||||
name = "greeting"
|
||||
```
|
||||
|
||||
The source code for the module can be found in the [examples repo](https://github.com/fluencelabs/examples/tree/main/greeting).
|
||||
|
||||
## Taking The Greeting Module For A Spin
|
||||
|
||||
Now that we have a Wasm module and service configuration, we can explore and test our achievements locally with the Fluence REPL tool `fce-repl`. Load the service for inspection and testing:
|
||||
|
||||
```bash
|
||||
fce-repl Config.toml
|
||||
|
||||
Welcome to the FCE REPL (version 0.5.2)
|
||||
app service was created with service id = 10afa1aa-22e6-4c8a-b668-6be95d2d3530
|
||||
elapsed time 54.290336ms
|
||||
|
||||
1> interface
|
||||
Loaded modules interface:
|
||||
|
||||
greeting:
|
||||
fn greeting(name: String) -> String
|
||||
|
||||
2>
|
||||
```
|
||||
|
||||
Using our service config file, we loaded the module and associated config info into the `fce-repl` tool, and with the `interface` command we obtain a listing of module name\(s\) and associated interfaces, which we can then execute in the tool:
|
||||
|
||||
```bash
|
||||
2> call greeting greeting ["Fluence"]
|
||||
result: String("Hi, Fluence")
|
||||
elapsed time: 98.02µs
|
||||
|
||||
3>
|
||||
```
|
||||
|
||||
The _interface_ command lists the available interfaces by module, i.e., the functions we designated as public and marked up with the _FCE_ macro in our source code. For more command info, use the _help_ command:
|
||||
|
||||
```text
|
||||
1> help
|
||||
Commands:
|
||||
|
||||
n/new [config_path] create a new service (current will be removed)
|
||||
l/load <module_name> <module_path> load a new Wasm module
|
||||
u/unload <module_name> unload a Wasm module
|
||||
c/call <module_name> <func_name> [args] call function with given name from given module
|
||||
i/interface print public interface of all loaded modules
|
||||
e/envs <module_name> print environment variables of a module
|
||||
f/fs <module_name> print filesystem state of a module
|
||||
h/help print this message
|
||||
q/quit/Ctrl-C exit
|
||||
```
|
||||
|
||||
The command we'll be using the most is the _call_ command to execute module functions locally as an effective way to test services, such as calling a function with an incorrect type:
|
||||
|
||||
```bash
|
||||
3> call greeting greeting [5]
|
||||
call failed with: JsonArgumentsDeserializationError: error Error("invalid type: integer `5`, expected a string", line: 0, column: 0) occurred while deserialize output result to a json value
|
||||
|
||||
4> call greeting greeting ["5"]
|
||||
result: String("Hi, 5")
|
||||
elapsed time: 61.505µs
|
||||
|
||||
5>
|
||||
```
|
||||
|
||||
The interface `fn greeting(name: String) -> String` specifies a string input, so an integer input causes the method to fail. Looks like all is working as designed and expected.
|
||||
|
||||
## Deploying The Greeting Module To A Local Node
|
||||
|
||||
Now that we are reasonably satisfied that our greeting service works, it is time to deploy it to the local network and test it with an AIR script. Before we do that, however, we need configuration files for each of the modules comprising our service. In our greeting service case, we only have one module and our configuration reads as follows:
|
||||
|
||||
```javascript
|
||||
// greeting_cfg.json
|
||||
{
|
||||
"name" : "greeting"
|
||||
}
|
||||
```
|
||||
|
||||
The configuration files are the per-module equivalents of the service configuration file we've seen earlier. They allow nodes to establish the meta-data and permission requirements, per module, before modules are linked to a service. The resulting configuration \(json\) object for a service over the underlying modules is called a _blueprint_:
|
||||
|
||||
```text
|
||||
{
|
||||
"id": "uuid-1234-...",
|
||||
"name": "some name",
|
||||
"dependencies": [ "module_a", "module_b", "facade_module" ]
|
||||
}
|
||||
```
|
||||
|
||||
Back to our use case at hand: our config file only needs a name specifier and we are ready to deploy our service to the network or local development node. For detailed information with respect to running a local node, see the [tutorial](https://github.com/boneyard93501/docs/tree/575ff7b260d1014bdaf4d26e791f0ce8f2841d0d/src/tutorials/tutorial_run_local_node.md).
|
||||
|
||||
To run the local node:
|
||||
|
||||
```bash
|
||||
# start the local dev node
|
||||
docker run --rm --name fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 18080 fluencelabs/fluence
|
||||
```
|
||||
|
||||
and pull the node id, _12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17_ in this case, from the log:
|
||||
|
||||
```bash
|
||||
docker run --rm --name my_fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 18080 fluencelabs/fluence:latest
|
||||
[2021-03-16T21:01:01.347081Z INFO particle_node]
|
||||
+-------------------------------------------------+
|
||||
| Hello from the Fluence Team. If you encounter |
|
||||
| any troubles with node operation, please update |
|
||||
| the node via |
|
||||
| docker pull fluencelabs/fluence:latest |
|
||||
| |
|
||||
| or contact us at |
|
||||
| github.com/fluencelabs/fluence/discussions |
|
||||
+-------------------------------------------------+
|
||||
|
||||
[2021-03-16T21:01:01.347925Z INFO server_config::fluence_config] Loading config from "/.fluence/Config.toml"
|
||||
[2021-03-16T21:01:01.348061Z INFO server_config::keys] generating a new key pair
|
||||
[2021-03-16T21:01:01.348410Z WARN server_config::defaults] New management key generated. private in base64 = SDB6bW/9Vwwy8KvLONkqPwPzaRnb51MzoNkm18fJ790=; peer_id = 12D3KooWCArczSKMzpnyfxKradjE25NEzcfQghkKrtDNuPbsvSU9
|
||||
[2021-03-16T21:01:01.348455Z INFO particle_node] AIR interpreter: "./aquamarine_0.7.5.wasm"
|
||||
[2021-03-16T21:01:01.348608Z INFO particle_node::config::certificates] storing new certificate for the key pair
|
||||
[2021-03-16T21:01:01.348862Z INFO particle_node] public key = FbBMwyYsRvutVSaPNhLYzUyghzHZFewXvmE7SdowNPHB
|
||||
[2021-03-16T21:01:01.350296Z INFO particle_node::node] server peer id = 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
[2021-03-16T21:01:01.353939Z INFO particle_node::node] Fluence listening on ["/ip4/0.0.0.0/tcp/7777", "/ip4/0.0.0.0/tcp/9999/ws"]
|
||||
[2021-03-16T21:01:01.356075Z INFO particle_node] Fluence has been successfully started.
|
||||
[2021-03-16T21:01:01.356098Z INFO particle_node] Waiting for Ctrl-C to exit...
|
||||
[2021-03-16T21:01:01.358364Z INFO tide::server] Server listening on http://0.0.0.0:18080
|
||||
[2021-03-16T21:01:02.067989Z INFO particle_node::network_api] Connected bootstrap 12D3KooWB9P1xmV3c7ZPpBemovbwCiRRTKd3Kq2jsVPQN4ZukDfy @ [/dns4/net10.fluence.dev/tcp/7001]
|
||||
[2021-03-16T21:01:02.068067Z INFO particle_node::network_api] Connected bootstrap 12D3KooWEXNUbCXooUwHrHBbrmjsrpHXoEphPwbjQXEGyzbqKnE9 @ [/dns4/net01.fluence.dev/tcp/7001]
|
||||
<snip>
|
||||
```
|
||||
|
||||
Now we are in a position to deploy our service using the `fldist` tool to the local node. In your project directory, run:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms artifacts/greeting.wasm:greeting_cfg.json -n MyGreeting
|
||||
```
|
||||
|
||||
And if all went well, you should see output similar to:
|
||||
|
||||
```text
|
||||
client seed: 3XUwhqLs7yLHqwE4xnh2C7LitvmT3dFq6Tj1shSRWw1A
|
||||
client peerId: 12D3KooWH2tx7ywW8nvZuGztJMFHhFh16g9fR63BkEQS6QYbG95o
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint MyGreeting to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWH2tx7ywW8nvZuGztJMFHhFh16g9fR63BkEQS6QYbG95o
|
||||
service id: 9712f9ca-7dfd-4ff5-817d-aef9e1e92e03
|
||||
service created successfully
|
||||
```
|
||||
|
||||
Which not only confirms the success of our operation but also gives us the service id, _9712f9ca-7dfd-4ff5-817d-aef9e1e92e03_ in this case. We can further verify our deployment by checking the installed modules on our local node with `fldist get_modules`:
|
||||
|
||||
```text
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 get_modules
|
||||
client seed: AgZjbuMvZmCWbqZBABXXtv3cjGTqYFfiVj7aqg8dm2fA
|
||||
client peerId: 12D3KooWFhUMisVC2VtXAertXt5oQQ7Xj1qppFZRM4mvEQ1iUaBP
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
[{"config":{"logger_enabled":true,"logging_mask":null,"mem_pages_count":100,"mounted_binaries":null,"wasi":{"envs":null,"mapped_dirs":null,"preopened_files":[]}},"hash":"c8aec6cbbc0a9632bf532b9553092ae6f66d2e3a5f71e11d1fe65e423c2204e2","name":"greeting"},{"config":{"logger_enabled":true,"logging_mask":null,"mem_pages_count":100,"mounted_binaries":null,"wasi":{"envs":null,"mapped_dirs":null,"preopened_files":[]}},"hash":"915d7487d4ae99f6136a7fe053c4ebd52cde1650c47492a315287117cedd0d3a","name":"greeting"}]
|
||||
```
|
||||
|
||||
Which confirms our recent upload!
|
||||
|
||||
Now that we have a service on our local node, we need to construct our AIR script to build our frontend.
|
||||
|
||||
```text
|
||||
(xor
|
||||
(seq
|
||||
(call relay (service "greeting") [name] result)
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
(call %init_peer_id% (returnService "run") [%last_error%])
|
||||
)
|
||||
```
|
||||
|
||||
As we've seen in the Quick Start section, we call the service _"greeting"_ with service id _service_ and the method parameter _name_. As usual, we use the `fldist` tool to execute the AIR script:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p greeting.clj -d '{"service":"9712f9ca-7dfd-4ff5-817d-aef9e1e92e03", "name": "Fluence"}'
|
||||
```
|
||||
|
||||
Giving us the expected response:
|
||||
|
||||
```bash
|
||||
client seed: EV3bFK7mnqk58HrssTfCdXeYSzrVeiTzxWmh2B7k2g6R
|
||||
client peerId: 12D3KooWLYtUhCj392W8XMhToCVrrsjowVdLirBzNHkEqCDmpe17
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 3dbbdfa6-7401-438d-89b9-53413b0022e4. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
"Hi, Fluence"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '9712f9ca-7dfd-4ff5-817d-aef9e1e92e03',
|
||||
function_name: 'greeting',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
And that's a wrap.
|
||||
|
||||
## Summary
|
||||
|
||||
In this section we worked through the various requisites and requirements to develop modules and services. To recap:
|
||||
|
||||
1. Create a Rust bin project and update the Cargo.toml to reflect our binary target
|
||||
2. Mark public module functions with the _FCE_ macro
|
||||
3. Build and compile the project with the `fce` tool
|
||||
4. Create a service config toml file to specify wasm file location, included modules, module permissions and more
|
||||
5. Use `fce-repl` to inspect and test modules and services
|
||||
6. Create a deployment json config file for each module for service deployment
|
||||
7. Deploy a service with `fldist` tool
|
||||
8. Execute the service with an AIR script from the `fldist` command-line tool
|
||||
|
@ -1,10 +0,0 @@
|
||||
# Summary
|
||||
|
||||
In the Quick Start sections, we tapped into the power of reuse and utilized already existing services. In this section, we developed the various modules and services from scratch using the Fluence tools, local node and Fluence testnet.
|
||||
|
||||
To further your understanding, please check out the remainder of the documentation:
|
||||
|
||||
* The [Knowledgebase](../knowledge_knowledge/) section covers the various aspects and components of the Fluence solution in more detail.
|
||||
* The [Tutorials](../tutorials_tutorials/) section offers a set of how-tos ranging from node and network configuration to fairly large-scale application development.
|
||||
* The [Recipes](../recipes_recipes/) section provides a \(growing\) set of pattern and pattern-like best practices and approaches.
|
||||
|
@ -1,8 +0,0 @@
|
||||
# Aquamarine
|
||||
|
||||
Fluence's Aquamarine stack is comprised of Aqua and Marine. Aqua is a programming language and runtime environment for peer-to-peer workflows. Marine, on the other hand, is a general purpose runtime that executes hosted code on nodes, whereas Aqua facilitates the programming of workflows composed from hosted code. In combination, Aqua and Marine enable any distributed application.
|
||||
|
||||
At the core of Aqua is the design ideal of pairing concurrent systems, and especially decentralized networks, with a programming and execution tool chain that avoids the centralized bottlenecks commonly introduced by workflow engines and business rule engines. To this end, Aqua manages the communication and coordination between services, devices, and APIs without introducing a centralized gateway and can be used to express various distributed systems: from simple request-response models to comprehensive network consensus algorithms.
|
||||
|
||||
|
||||
|
@ -1,6 +0,0 @@
|
||||
# Aqua
|
||||
|
||||
At the core of Fluence is the open-source language **Aqua** that allows for the programming of peer-to-peer scenarios separately from the computations on peers.
|
||||
|
||||
Please see the [Aqua book](https://doc.fluence.dev/aqua-book/) for an introduction to the language and reference materials.
|
||||
|
@ -1,6 +0,0 @@
|
||||
# Aqua
|
||||
|
||||
## Aquamarine High Level Language
|
||||
|
||||
_**Stay Tuned -- Coming Soon To A Repo Near You**_
|
||||
|
@ -1,55 +0,0 @@
|
||||
# AIR
|
||||
|
||||
The Aquamarine Intermediate Representation \(AIR\) is a low-level language used to program both distributed networks and the services deployed on them. The language consists of a small number of instructions:
|
||||
|
||||
* _**call**_ : execution
* _**seq**_ : sequential
* _**par**_ : parallel
* _**fold**_ : iteration
* _**xor**_ : branching & error handling
* _**null**_ : empty instruction
|
||||
|
||||
which operate on _peer-id_ \(location\), _service-id_, and _service method_ over an argument list, see Figure 1.
|
||||
|
||||
**Figure 1: AIR Instruction Definition** ![Execution](../../.gitbook/assets/air_call_execution_1.png)
|
||||
|
||||
## Instructions
|
||||
|
||||
AIR instructions are intended to launch the execution of a service method as follows:
|
||||
|
||||
1. The method is executed on the peer specified by the peer id \(location\) parameter
|
||||
2. The peer is expected to have the Wasm service specified by the service id parameter
|
||||
3. The service must have a callable method specified by the method parameter
|
||||
4. The arguments specified by the argument list are passed to the method
|
||||
5. The result of the method is returned under the output name
|
||||
|
||||
**Figure 2: Sequential Instruction** ![Execution](../../.gitbook/assets/air_sequential_2%20%281%29%20%281%29%20%281%29%20%281%29%20%281%29%20%282%29%20%283%29%20%284%29%20%284%29%20%284%29%20%281%29.png)
|
||||
|
||||
The _**seq**_ instruction takes two instructions at most as its arguments and executes them sequentially, one after the other.
|
||||
|
||||
**Figure 3: Parallel Instruction** ![Execution](../../.gitbook/assets/air_par_3.png)
|
||||
|
||||
The _**par**_ instruction takes at most two instructions as its arguments. Particles may execute on parallel paths if, and only if, each referenced service is hosted on a different node; otherwise, particles execute sequentially.
|
||||
|
||||
TODO: add a better graphic showing the distinction between branching and seq.
|
||||
|
||||
**Figure 4: Fold Instruction** ![Execution](https://github.com/fluencelabs/gitbook-docs/tree/84e814d02d9299034c9c031adf7f081bb59898b9/.gitbook/assets/air_fold_4%20%281%29%20%282%29%20%281%29.png)
|
||||
|
||||
The _**fold**_ instruction iterates over the elements of an array and works as follows:
|
||||
|
||||
* _**fold**_ instruction takes three arguments: an array, a variable and an instruction
|
||||
* At each iteration, the variable is assigned an element of the array and the argument-instruction is executed
|
||||
* The argument-instruction can access the variable and uses the next statement to trigger the next iteration
|
||||
|
||||
**Figure 5: Branching Instruction** ![Execution](../../.gitbook/assets/air_xor_5.png)
|
||||
|
||||
This instruction is intended for organizing branches in the flow of execution as well as for handling errors:
|
||||
|
||||
* The _**XOR**_ instruction takes two instructions as its arguments
|
||||
* The first instruction is executed and if the execution is successful, then the second instruction is ignored
|
||||
* If the first instruction fails, then the second one is executed.
|
||||
|
||||
**Figure 6: Null Instruction** ![Execution](https://github.com/fluencelabs/gitbook-docs/tree/84e814d02d9299034c9c031adf7f081bb59898b9/.gitbook/assets/air_null_6%20%281%29%20%282%29.png)
|
||||
|
||||
This is an empty instruction: it takes no arguments and does nothing. The _**null**_ instruction is useful for generating code.
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Aqua VM
|
||||
|
@ -1,14 +0,0 @@
|
||||
# Marine
|
||||
|
||||
[Marine](https://github.com/fluencelabs/marine) is a general purpose WebAssembly runtime favoring Wasm modules based on the [ECS](https://en.wikipedia.org/wiki/Entity_component_system) pattern or plugin architecture and uses Wasm [Interface Types](https://github.com/WebAssembly/interface-types/blob/master/proposals/interface-types/Explainer.md) \(IT\) to implement a [shared-nothing](https://en.wikipedia.org/wiki/Shared-nothing_architecture) linking scheme. Fluence [nodes](https://github.com/fluencelabs/fluence) use Marine to host the Aqua VM and execute hosted Wasm services.
|
||||
|
||||
The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) hides the IT implementation details behind a handy procedural macro, `[marine]`, and provides the scaffolding for unit tests.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,35 +0,0 @@
|
||||
# Marine CLI
|
||||
|
||||
The [Marine command line tool](https://github.com/fluencelabs/marine) provides the `marine build` functionality, analogous to `cargo build`, which compiles a project's Rust code to _wasm32-wasi_ modules. In addition, `marine` provides utilities to inspect Wasm modules, expose Wasm module attributes or manually set module properties.
|
||||
|
||||
```text
|
||||
mbp16~(:|✔) % marine --help
|
||||
Fluence Marine command line tool 0.6.7
|
||||
Fluence Labs
|
||||
|
||||
USAGE:
|
||||
marine [SUBCOMMAND]
|
||||
|
||||
FLAGS:
|
||||
-h, --help Prints help information
|
||||
-V, --version Prints version information
|
||||
|
||||
SUBCOMMANDS:
|
||||
aqua Shows data types of provided module in a format suitable for Aqua
|
||||
build Builds provided Rust project to Wasm
|
||||
help Prints this message or the help of the given subcommand(s)
|
||||
info Shows manifest and sdk version of the provided Wasm file
|
||||
it Shows IT of the provided Wasm file
|
||||
repl Starts Fluence application service REPL
|
||||
set Sets interface types and version to the provided Wasm file
|
||||
mbp16~(:|✔) %
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,35 +0,0 @@
|
||||
# Marine Repl
|
||||
|
||||
[`mrepl`](https://crates.io/crates/mrepl) is a command line tool to locally run a Marine instance to inspect, run, and test Wasm modules and service configurations. We can run the Repl either with `mrepl` or `marine repl`.
|
||||
|
||||
```text
|
||||
mbp16~(:|✔) % mrepl
|
||||
Welcome to the Marine REPL (version 0.7.2)
|
||||
Minimal supported versions
|
||||
sdk: 0.6.0
|
||||
interface-types: 0.20.0
|
||||
|
||||
New version is available! 0.7.2 -> 0.7.4
|
||||
To update run: cargo +nightly install mrepl --force
|
||||
|
||||
app service was created with service id = d81a4de5-55c3-4cb7-935c-3d5c6851320d
|
||||
elapsed time 486.234µs
|
||||
|
||||
1> help
|
||||
Commands:
|
||||
|
||||
n/new [config_path] create a new service (current will be removed)
|
||||
l/load <module_name> <module_path> load a new Wasm module
|
||||
u/unload <module_name> unload a Wasm module
|
||||
c/call <module_name> <func_name> [args] call function with given name from given module
|
||||
i/interface print public interface of all loaded modules
|
||||
e/envs <module_name> print environment variables of a module
|
||||
f/fs <module_name> print filesystem state of a module
|
||||
h/help print this message
|
||||
q/quit/Ctrl-C exit
|
||||
|
||||
2>
|
||||
```
|
||||
|
||||
|
||||
|
@ -1,531 +0,0 @@
|
||||
# Marine Rust SDK
|
||||
|
||||
The [marine-rs-sdk](https://github.com/fluencelabs/marine-rs-sdk) empowers developers to write services suitable for peer hosting in peer-to-peer networks using the Marine Virtual Machine by enabling the wasm32-wasi compile target for Marine. For an introduction to writing services with the marine-rs-sdk, see the [Developing Modules And Services]() section.
|
||||
|
||||
### API
|
||||
|
||||
The procedural macros `[marine]` and `[marine_test]` are the two primary features provided by the SDK. The `[marine]` macro can be applied to a function, external block or structure. The `[marine_test]` macro, on the other hand, allows the use of the familiar `cargo test` to execute tests over the actual Wasm module generated from the service code.
|
||||
|
||||
#### Function Export
|
||||
|
||||
Applying the `[marine]` macro to a function results in its export, which means that it can be called from other modules or AIR scripts. For the function to be compatible with this macro, its arguments must be of the `ftype`, which is defined as follows:
|
||||
|
||||
`ftype` = `bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `String`
|
||||
`ftype` = `ftype` \| `Vec`<`ftype`>
|
||||
`ftype` = `ftype` \| `Record`<`ftype`>
|
||||
|
||||
In other words, the arguments must be one of the types listed below:
|
||||
|
||||
* one of the following Rust basic types: `bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `String`
|
||||
* a vector of elements of the above types
|
||||
* a vector composed of vectors of the above type, where recursion is acceptable, e.g. the type `Vec<Vec<Vec<u8>>>` is permissible
|
||||
* a record, where all fields are of the basic Rust types
|
||||
* a record, where all fields are of any above types or other records
|
||||
|
||||
The return type of a function must follow the same rules, but currently only one return type is possible.
|
||||
|
||||
See the example below of an exposed function with a complex type signature and return value:
|
||||
|
||||
```rust
|
||||
// export TestRecord as a public data structure bound by
|
||||
// the IT type constraints
|
||||
#[marine]
|
||||
pub struct TestRecord {
|
||||
pub field_0: i32,
|
||||
pub field_1: Vec<Vec<u8>>,
|
||||
}
|
||||
|
||||
// export foo as a public function bound by the
|
||||
// IT type constraints
#[marine]
|
||||
pub fn foo(arg_1: Vec<Vec<Vec<Vec<TestRecord>>>>, arg_2: String) -> Vec<Vec<Vec<Vec<TestRecord>>>> {
|
||||
unimplemented!()
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
{% hint style="info" %}
|
||||
Function Export Requirements
|
||||
|
||||
* wrap a target function with the `[marine]` macro
|
||||
* function arguments must be of `ftype`
* the function return type must also be of `ftype`
|
||||
{% endhint %}
|
||||
|
||||
#### Function Import
|
||||
|
||||
The `[marine]` macro can also wrap an [`extern` block](https://doc.rust-lang.org/std/keyword.extern.html). In this case, all functions declared in it are considered imported functions. If there are imported functions in some module, say, module A, then:
|
||||
|
||||
* There should be another module, module B, that exports the same functions. The name of module B is indicated in the `link` macro \(see examples below\).
|
||||
* Module B should be loaded to `Marine` by the moment the loading of module A starts. Module A cannot be loaded if at least one imported function is absent in `Marine`.
|
||||
|
||||
See the examples below for wrapped `extern` block usage:
|
||||
|
||||
{% tabs %}
|
||||
{% tab title="Example 1" %}
|
||||
```rust
|
||||
#[marine]
|
||||
pub struct TestRecord {
|
||||
pub field_0: i32,
|
||||
pub field_1: Vec<Vec<u8>>,
|
||||
}
|
||||
|
||||
// wrap the extern block with the marine macro to expose the function
|
||||
// as an import to the Marine VM
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "some_module")]
|
||||
extern "C" {
|
||||
pub fn foo(arg: Vec<Vec<Vec<Vec<TestRecord>>>>, arg_2: String) -> Vec<Vec<Vec<Vec<TestRecord>>>>;
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
|
||||
{% tab title="Example 2" %}
|
||||
```rust
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "some_module")]
|
||||
extern "C" {
|
||||
pub fn foo(arg: Vec<Vec<Vec<Vec<u8>>>>) -> Vec<Vec<Vec<Vec<u8>>>>;
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
{% endtabs %}
|
||||
|
||||
|
||||
|
||||
{% hint style="info" %}
|
||||
|
||||
|
||||
#### Function import requirements
|
||||
|
||||
* wrap an extern block with the function\(s\) to be imported with the `[marine]` macro
|
||||
* all function\(s\) arguments must be of the `ftype` type
|
||||
* the return type of the function\(s\) must be `ftype`
|
||||
{% endhint %}
|
||||
|
||||
####
|
||||
|
||||
#### Structures
|
||||
|
||||
Finally, the `[marine]` macro can wrap a `struct`, making it possible to use it as a function argument or return type. Note that
|
||||
|
||||
* only macro-wrapped structures can be used as function arguments and return types
|
||||
* all fields of the wrapped structure must be public and of the `ftype`.
|
||||
* it is possible to have inner records in the macro-wrapped structure and to import wrapped structs from other crates
|
||||
|
||||
See the example below for wrapping `struct`:
|
||||
|
||||
{% tabs %}
|
||||
{% tab title="Example 1" %}
|
||||
```rust
|
||||
#[marine]
|
||||
pub struct TestRecord0 {
|
||||
pub field_0: i32,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub struct TestRecord1 {
|
||||
pub field_0: i32,
|
||||
pub field_1: String,
|
||||
pub field_2: Vec<u8>,
|
||||
pub test_record_0: TestRecord0,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub struct TestRecord2 {
|
||||
pub test_record_0: TestRecord0,
|
||||
pub test_record_1: TestRecord1,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
fn foo(mut test_record: TestRecord2) -> TestRecord2 { unimplemented!(); }
|
||||
```
|
||||
{% endtab %}
|
||||
|
||||
{% tab title="Example 2" %}
|
||||
```rust
|
||||
#[fce]
|
||||
pub struct TestRecord0 {
|
||||
pub field_0: i32,
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub struct TestRecord1 {
|
||||
pub field_0: i32,
|
||||
pub field_1: String,
|
||||
pub field_2: Vec<u8>,
|
||||
pub test_record_0: TestRecord0,
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub struct TestRecord2 {
|
||||
pub test_record_0: TestRecord0,
|
||||
pub test_record_1: TestRecord1,
|
||||
}
|
||||
|
||||
#[fce]
|
||||
#[link(wasm_import_module = "some_module")]
|
||||
extern "C" {
|
||||
fn foo(mut test_record: TestRecord2) -> TestRecord2;
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
|
||||
{% tab title="Example 3" %}
|
||||
```rust
|
||||
mod data_crate {
|
||||
use fluence::marine;
|
||||
#[marine]
|
||||
pub struct Data {
|
||||
pub name: String,
|
||||
pub data: f64,
|
||||
}
|
||||
}
|
||||
|
||||
use data_crate::Data;
|
||||
use fluence::marine;
|
||||
|
||||
fn main() {}
|
||||
|
||||
#[marine]
|
||||
fn some_function() -> Data {
|
||||
Data {
|
||||
name: "example".into(),
|
||||
data: 1.0,
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{% endtab %}
|
||||
{% endtabs %}
|
||||
|
||||
|
||||
|
||||
{% hint style="info" %}
|
||||
|
||||
|
||||
> #### Structure passing requirements
|
||||
>
|
||||
> * wrap a structure with the `[marine]` macro
|
||||
> * all structure fields must be of the `ftype`
|
||||
> * the structure must be pointed to without preceding package import in a function signature, i.e`StructureName` but not `package_name::module_name::StructureName`
|
||||
> * wrapped structs can be imported from crates
|
||||
{% endhint %}
|
||||
|
||||
####
|
||||
|
||||
#### Call Parameters
|
||||
|
||||
There is a special API function `fluence::get_call_parameters()` that returns an instance of the [`CallParameters`](https://github.com/fluencelabs/marine-rs-sdk/blob/master/fluence/src/call_parameters.rs#L35) structure defined as follows:
|
||||
|
||||
```rust
|
||||
pub struct CallParameters {
|
||||
/// Peer id of the AIR script initiator.
|
||||
pub init_peer_id: String,
|
||||
|
||||
/// Id of the current service.
|
||||
pub service_id: String,
|
||||
|
||||
/// Id of the service creator.
|
||||
pub service_creator_peer_id: String,
|
||||
|
||||
/// Id of the host which runs this service.
|
||||
pub host_id: String,
|
||||
|
||||
/// Id of the particle whose execution resulted in a call to this service.
|
||||
pub particle_id: String,
|
||||
|
||||
/// Security tetraplets which describe the origin of the arguments.
|
||||
pub tetraplets: Vec<Vec<SecurityTetraplet>>,
|
||||
}
|
||||
```
|
||||
|
||||
CallParameters are especially useful in constructing authentication services:
|
||||
|
||||
```rust
// auth.rs
use fluence::{marine, CallParameters};

pub fn is_owner() -> bool {
    let meta: CallParameters = fluence::get_call_parameters();
    let caller = meta.init_peer_id;
    let owner = meta.service_creator_peer_id;

    caller == owner
}

#[marine]
pub fn am_i_owner() -> bool {
    is_owner()
}
```
|
||||
|
||||
####
|
||||
|
||||
#### MountedBinaryResult
|
||||
|
||||
Due to the inherent limitations of Wasm modules, such as a lack of sockets, it may be necessary for a module to interact with its host to bridge such gaps, e.g., to use an https transport provider like _curl_. In order for a Wasm module to use a host's _curl_ capabilities, we need to provide access to the binary, which at the code level is achieved through the Rust `extern` block:
|
||||
|
||||
```rust
|
||||
// Importing a linked binary, curl, to a Wasm module
|
||||
#![allow(improper_ctypes)]
|
||||
|
||||
use fluence::marine;
|
||||
use fluence::module_manifest;
|
||||
use fluence::MountedBinaryResult;
|
||||
|
||||
module_manifest!();
|
||||
|
||||
pub fn main() {}
|
||||
|
||||
#[marine]
|
||||
pub fn curl_request(curl_cmd: Vec<String>) -> MountedBinaryResult {
|
||||
let response = curl(curl_cmd);
|
||||
response
|
||||
}
|
||||
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "host")]
|
||||
extern "C" {
|
||||
fn curl(cmd: Vec<String>) -> MountedBinaryResult;
|
||||
}
|
||||
```
|
||||
|
||||
The above code creates a "curl adapter", i.e., a Wasm module that allows other Wasm modules to use the `curl_request` function, which calls the imported _curl_ binary, to make http calls. Please note that we are wrapping the `extern` block with the `[marine]` macro and introducing a Marine-native data structure, [`MountedBinaryResult`](https://github.com/fluencelabs/marine/blob/master/examples/url-downloader/curl_adapter/src/main.rs), as the linked-function return value.
|
||||
|
||||
Please note that if you want to use `curl_request` with testing \(see below\), the curl call needs to be marked unsafe, e.g.:
|
||||
|
||||
```rust
|
||||
let response = unsafe { curl(curl_cmd) };
|
||||
```
|
||||
|
||||
since cargo does not have access to the marine macro to handle unsafe for us.
|
||||
|
||||
MountedBinaryResult itself is a Marine-compatible struct containing a binary's process return code, an error string, and stdout and stderr as byte arrays:
|
||||
|
||||
```rust
|
||||
#[marine]
|
||||
#[derive(Clone, PartialEq, Default, Eq, Debug, Serialize, Deserialize)]
|
||||
pub struct MountedBinaryResult {
|
||||
/// Return process exit code or host execution error code, where SUCCESS_CODE means success.
|
||||
pub ret_code: i32,
|
||||
|
||||
/// Contains the string representation of an error, if ret_code != SUCCESS_CODE.
|
||||
pub error: String,
|
||||
|
||||
/// The data that the process wrote to stdout.
|
||||
pub stdout: Vec<u8>,
|
||||
|
||||
/// The data that the process wrote to stderr.
|
||||
pub stderr: Vec<u8>,
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
MountedBinaryResult can then be used in a variety of match or conditional tests.
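For example, a minimal sketch of such a conditional test might look like the following. The wrapper name `curl_text`, the use of `0` as the success code, and the assumption that stdout holds valid UTF-8 are ours for illustration, not part of the SDK:

```rust
// Branch on MountedBinaryResult: return stdout as a string on success,
// otherwise surface the exit code and error string.
#[marine]
pub fn curl_text(curl_cmd: Vec<String>) -> String {
    let response = unsafe { curl(curl_cmd) };
    if response.ret_code == 0 {
        String::from_utf8_lossy(&response.stdout).to_string()
    } else {
        format!("curl failed ({}): {}", response.ret_code, response.error)
    }
}
```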
|
||||
|
||||
#### Testing
|
||||
|
||||
Since we are compiling to a wasm32-wasi target with `ftype` constraints, the basic `cargo test` is not all that useful or even usable for our purposes. To alleviate that limitation, Fluence has introduced the [`[marine-test]` macro](https://github.com/fluencelabs/marine-rs-sdk/tree/master/crates/marine-test-macro) that does a lot of the heavy lifting to allow developers to use `cargo test` as intended. That is, the `[marine-test]` macro generates the necessary code to call Marine, one instance per test function, based on the Wasm module and associated configuration file, so that the actual test function is run against the Wasm module, not the native code.
|
||||
|
||||
Let's have a look at an implementation example:
|
||||
|
||||
```rust
|
||||
use fluence::marine;
|
||||
use fluence::module_manifest;
|
||||
|
||||
module_manifest!();
|
||||
|
||||
pub fn main() {}
|
||||
|
||||
#[marine]
|
||||
pub fn greeting(name: String) -> String {    // 1
|
||||
format!("Hi, {}", name)
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
    use fluence_test::marine_test;   // 2
|
||||
|
||||
    #[marine_test(config_path = "../Config.toml", modules_dir = "../artifacts")]  // 3
|
||||
fn empty_string() {
|
||||
        let actual = greeting.greeting(String::new());  // 4
|
||||
assert_eq!(actual, "Hi, ");
|
||||
}
|
||||
|
||||
#[marine_test(config_path = "../Config.toml", modules_dir = "../artifacts")]
|
||||
fn non_empty_string() {
|
||||
let actual = greeting.greeting("name".to_string());
|
||||
assert_eq!(actual, "Hi, name");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
1. We wrap a basic _greeting_ function with the `[marine]` macro, which results in the greeting.wasm module
|
||||
2. We wrap our tests as usual with `[cfg(test)]` and import the fluence_test crate. Do **not** import _super_ or the _local crate_.
|
||||
3. Instead, we apply the `[marine_test]` macro to each of the test functions by providing the path to the config file, e.g., Config.toml, and the directory containing the Wasm module we obtained after compiling our project with `marine build`. It is imperative that project compilation precedes the test runner, otherwise the required Wasm file won't be there.
|
||||
4. The target of our tests is the `pub fn greeting` function. Since we are calling the function from the Wasm module we must prefix the function name with the module namespace -- `greeting` in this example case.
|
||||
|
||||
Now that we have our Wasm module and tests in place, we can proceed with `cargo test --release`. Note that using the `release` profile vastly improves the import speed of the necessary Wasm modules.
|
||||
|
||||
### Features
|
||||
|
||||
The SDK has two useful features: `logger` and `debug`.
|
||||
|
||||
#### Logger
|
||||
|
||||
Using logging is a simple way to assist in debugging without deploying the module\(s\) to a peer-to-peer network node. The `logger` feature allows you to use a special logger that is built on top of the [log](https://crates.io/crates/log) crate.
|
||||
|
||||
To enable logging, please specify the `logger` feature of the Fluence SDK in `Cargo.toml` and add the [log](https://docs.rs/log/0.4.11/log/) crate:
|
||||
|
||||
```toml
|
||||
[dependencies]
|
||||
log = "0.4.14"
|
||||
fluence = { version = "0.6.9", features = ["logger"] }
|
||||
```
|
||||
|
||||
The logger should be initialized before its usage. This can be done in the `main` function as shown in the example below.
|
||||
|
||||
```rust
|
||||
use fluence::marine;
|
||||
use fluence::WasmLogger;
|
||||
|
||||
pub fn main() {
|
||||
WasmLogger::new()
|
||||
// with_log_level can be skipped,
|
||||
// logger will be initialized with Info level in this case.
|
||||
.with_log_level(log::Level::Info)
|
||||
.build()
|
||||
.unwrap();
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub fn put(name: String, file_content: Vec<u8>) -> String {
|
||||
log::info!("put called with file name {}", file_name);
|
||||
unimplemented!()
|
||||
}
|
||||
```
|
||||
|
||||
In addition to the standard log creation features, the Fluence logger allows the so-called target map to be configured during the initialization step. This allows you to filter out logs by `logging_mask`, which can be set for each module in the service configuration. Let's consider an example:
|
||||
|
||||
```rust
|
||||
const TARGET_MAP: [(&str, i64); 4] = [
|
||||
("instruction", 1 << 1),
|
||||
("data_cache", 1 << 2),
|
||||
("next_peer_pks", 1 << 3),
|
||||
("subtree_complete", 1 << 4),
|
||||
];
|
||||
|
||||
pub fn main() {
|
||||
use std::collections::HashMap;
|
||||
use std::iter::FromIterator;
|
||||
|
||||
let target_map = HashMap::from_iter(TARGET_MAP.iter().cloned());
|
||||
|
||||
fluence::WasmLogger::new()
|
||||
.with_target_map(target_map)
|
||||
.build()
|
||||
.unwrap();
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub fn foo() {
|
||||
log::info!(target: "instruction", "this will print if (logging_mask & 1) != 0");
|
||||
log::info!(target: "data_cache", "this will print if (logging_mask & 2) != 0");
|
||||
}
|
||||
```
|
||||
|
||||
Here, an array called `TARGET_MAP` is defined and provided to a logger in the `main` function of a module. Each entry of this array contains a string \(a target\) and a number that represents the bit position in the 64-bit mask `logging_mask`. When you write a log message request `log::info!`, its target must coincide with one of the strings \(the targets\) defined in the `TARGET_MAP` array. The log will be printed if `logging_mask` for the module has the corresponding target bit set.
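As a small worked example \(our own arithmetic, not an SDK API\): with the `TARGET_MAP` above, enabling only the `instruction` and `data_cache` targets means setting the module's `logging_mask` to the OR of their bit values:

```rust
// instruction -> 1 << 1 = 2, data_cache -> 1 << 2 = 4, so both -> 2 | 4 = 6
fn main() {
    let logging_mask: i64 = (1 << 1) | (1 << 2);
    println!("set logging_mask = {} in the module's service configuration", logging_mask);
}
```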
|
||||
|
||||
{% hint style="info" %}
|
||||
REPL also uses the log crate to print logs from Wasm modules. Log messages will be printed if the `RUST_LOG` environment variable is set.
|
||||
{% endhint %}
|
||||
|
||||
#### Debug
|
||||
|
||||
The application of the second feature is limited to obtaining some of the internal details of the IT execution. Normally, this feature should not be used by a backend developer. Here you can see an example of such details for the greeting service compiled with the `debug` feature:
|
||||
|
||||
```bash
|
||||
# running the greeting service compiled with debug feature
|
||||
~ $ RUST_LOG="info" fce-repl Config.toml
|
||||
Welcome to the Fluence FaaS REPL
|
||||
app service's created with service id = e5cfa463-ff50-4996-98d8-4eced5ac5bb9
|
||||
elapsed time 40.694769ms
|
||||
|
||||
1> call greeting greeting "user"
|
||||
[greeting] sdk.allocate: 4
|
||||
[greeting] sdk.set_result_ptr: 1114240
|
||||
[greeting] sdk.set_result_size: 8
|
||||
[greeting] sdk.get_result_ptr, returns 1114240
|
||||
[greeting] sdk.get_result_size, returns 8
|
||||
[greeting] sdk.get_result_ptr, returns 1114240
|
||||
[greeting] sdk.get_result_size, returns 8
|
||||
[greeting] sdk.deallocate: 0x110080 8
|
||||
|
||||
result: String("Hi, user")
|
||||
elapsed time: 222.675µs
|
||||
```
|
||||
|
||||
The most important information in these logs relates to the `allocate`/`deallocate` function calls. The `sdk.allocate: 4` line corresponds to passing the 4-byte `user` string to the Wasm module, with memory allocated inside the module and the string copied there. The `sdk.deallocate: 0x110080 8` line refers to passing the 8-byte resulting string `Hi, user` to the host side. Since all arguments and results are passed by value, `deallocate` is called to free the no-longer-needed memory inside the Wasm module.
|
||||
|
||||
#### Module Manifest
|
||||
|
||||
The `module_manifest!` macro embeds the Interface Types \(IT\), SDK and Rust project versions as well as additional project and build information into the Wasm module. For the macro to be usable, it needs to be imported and initialized in the _main.rs_ file:
|
||||
|
||||
```rust
|
||||
// main.rs
|
||||
use fluence::marine;
|
||||
use fluence::module_manifest; // import manifest macro
|
||||
|
||||
module_manifest!(); // initialize macro
|
||||
|
||||
fn main() {}
|
||||
|
||||
#[marine]
|
||||
fn some_function() {}
|
||||
|
||||
```
|
||||
|
||||
Using the Marine CLI, we can inspect a module's manifest with `marine info`:
|
||||
|
||||
```text
|
||||
mbp16~/localdev/struct-exp(main|…) % marine info -i artifacts/*.wasm
|
||||
it version: 0.20.1
|
||||
sdk version: 0.6.0
|
||||
authors: The Fluence Team
|
||||
version: 0.1.0
|
||||
description: foo-wasm, a Marine wasi module
|
||||
repository:
|
||||
build time: 2021-06-11 21:08:59.855352 +00:00 UTC
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,22 +0,0 @@
|
||||
# Aquamarine
|
||||
|
||||
Aquamarine is a programming language and executable choreography tool for distributed applications and backends. Aquamarine manages the communication and coordination between services, devices, and APIs without introducing any centralized gateway and can be used to express various distributed systems: from simple request-response to comprehensive network consensus algorithms.
|
||||
|
||||
At the core of Aquamarine is the design ideal of pairing concurrent systems, and especially decentralized networks, with a programming and execution tool chain to avoid the centralized bottlenecks commonly introduced by [workflow engines](https://en.wikipedia.org/wiki/Workflow_engine) and [business rule engines](https://en.wikipedia.org/wiki/Business_rules_engine). This not only makes Aquamarine the Rosetta Stone of the Fluence solution but also a very powerful, generic coordination and composition medium.
|
||||
|
||||
## Background
|
||||
|
||||
When we build systems, we need to be able to model, specify, analyze and verify them, and this is especially important for concurrent systems such as parallel and multi-threaded systems. [Formal specification](https://en.wikipedia.org/wiki/Formal_specification) is a family of formal approaches to designing, modeling, and verifying systems. In the context of concurrent systems, there are two distinct formal specification techniques available. The state-oriented approach is concerned with modeling and verifying a system's state and state transitions, and is often accomplished with [TLA+](https://en.wikipedia.org/wiki/TLA%2B). Modern blockchain design, modeling, and verification tend to rely on state-based specification.
|
||||
|
||||
An alternative, complementary approach is based on [Process calculus](https://en.wikipedia.org/wiki/Process_calculus) to model and verify the sequence of communication operations of a system at any given time. [π-calculus](https://en.wikipedia.org/wiki/%CE%A0-calculus) is a modern process calculus employed in a wide range of applications ranging from biology to games and business processes.
|
||||
|
||||
Aquamarine, Fluence's distributed composition language and runtime, is based on π-calculus and provides a solid theoretical basis toward the design, modeling, implementation, and verification of a wide class of distributed, peer-to-peer networks, applications and backends.
|
||||
|
||||
## Language
|
||||
|
||||
[Aquamarine Intermediate Representation](https://github.com/boneyard93501/docs/tree/a512080f81137fb575a5b96d3f3e83fa3044fd1c/src/knowledge-base/knowledge_aquamarine__air.md) \(AIR\) is a low-level language modeled after the [WebAssembly text format](https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format) and allows developers to manage network peers as well as services and backends. AIR, while intended as a compile target, is currently the only Aquamarine language implementation although a high level language \(HLL\) is currently under active development.
|
||||
|
||||
## Runtime
|
||||
|
||||
The Aquamarine runtime is a virtual machine executed by the [Fluence Compute Engine](https://github.com/boneyard93501/docs/tree/a512080f81137fb575a5b96d3f3e83fa3044fd1c/src/knowledge-base/knowledge_fce.md) \(FCE\) running not only on every Fluence network peer but also on every frontend client. The availability of the runtime on every node aids in decentralized service discovery and execution at the same level of decentralization as the network itself, which is of significant importance. Moreover, with execution scripts running on both the client and the \(remote\) nodes, a high degree of auditability and verifiability can be attained.
|
||||
|
@ -1,6 +0,0 @@
|
||||
# Aqua
|
||||
|
||||
## Aquamarine High Level Language
|
||||
|
||||
_**Stay Tuned -- Coming Soon To A Repo Near You**_
|
||||
|
@ -1,6 +0,0 @@
|
||||
# Aqua
|
||||
|
||||
## Aquamarine High Level Language
|
||||
|
||||
_**Stay Tuned -- Coming Soon To A Repo Near You**_
|
||||
|
@ -1,55 +0,0 @@
|
||||
# AIR
|
||||
|
||||
The Aquamarine Intermediate Representation \(AIR\) is a low level language to program both distributed networks and the services deployed on them. The language is comprised of a small number of instructions:
|
||||
|
||||
* _**call**_ : execution
|
||||
* _**seq**_ : sequential
|
||||
* _**par** :_ parallel
|
||||
* _**fold**_ : iteration
|
||||
* _**xor** :_ branching & error handling
|
||||
* _**null**_ : empty instruction
|
||||
|
||||
which operate on _peer-id_ \(location\), _service-id_, and _service method_ over an argument list, see Figure 1.
|
||||
|
||||
**Figure 1: AIR Instruction Definition** ![Execution](../../../.gitbook/assets/air_call_execution_1.png)
|
||||
|
||||
## Instructions
|
||||
|
||||
AIR instructions are intended to launch the execution of a service method as follows:
|
||||
|
||||
1. The method is executed on the peer specified by the peer id \(location\) parameter
|
||||
2. The peer is expected to have the Wasm service specified by the service id parameter
|
||||
3. The service must have a callable method specified by the method parameter
|
||||
4. The arguments specified by the argument list are passed to the method
|
||||
5. The result of the method is returned under the output name
|
||||
|
||||
**Figure 2: Sequential Instruction** ![Execution](../../../.gitbook/assets/air_sequential_2%20%281%29%20%281%29%20%281%29%20%281%29%20%281%29%20%282%29%20%283%29%20%284%29%20%284%29%20%284%29%20%281%29.png)
|
||||
|
||||
The _**seq**_ instruction takes two instructions at most as its arguments and executes them sequentially, one after the other.
|
||||
|
||||
**Figure 3: Parallel Instruction** ![Execution](../../../.gitbook/assets/air_par_3.png)
|
||||
|
||||
The _**par**_ instruction takes two instructions at most as its arguments. Particles may execute on parallel paths iff each referenced service is hosted on a different node; otherwise, particles execute sequentially.
|
||||
|
||||
TODO: add a better graphic showing the distinction between branching and seq.
|
||||
|
||||
**Figure 4: Fold Instruction** ![Execution](https://github.com/fluencelabs/gitbook-docs/tree/84e814d02d9299034c9c031adf7f081bb59898b9/.gitbook/assets/air_fold_4%20%281%29%20%282%29%20%281%29.png)
|
||||
|
||||
The _**fold**_ instruction iterates over the elements of an array and works as follows:
|
||||
|
||||
* _**fold**_ instruction takes three arguments: an array, a variable and an instruction
|
||||
* At each iteration, the variable is assigned an element of the array and the argument-instruction is executed
|
||||
* The argument-instruction can access the variable and uses the next statement to trigger the next iteration
|
||||
|
||||
**Figure 5: Branching Instruction** ![Execution](../../../.gitbook/assets/air_xor_5.png)
|
||||
|
||||
This instruction is intended for organizing branches in the flow of execution as well as for handling errors:
|
||||
|
||||
* The _**XOR**_ instruction takes two instructions as its arguments
|
||||
* The first instruction is executed and if the execution is successful, then the second instruction is ignored
|
||||
* If the first instruction fails, then the second one is executed.
|
||||
|
||||
**Figure 6: Null Instruction** ![Execution](https://github.com/fluencelabs/gitbook-docs/tree/84e814d02d9299034c9c031adf7f081bb59898b9/.gitbook/assets/air_null_6%20%281%29%20%282%29.png)
|
||||
|
||||
This is an empty instruction: it takes no arguments and does nothing. The _**null**_ instruction is useful for generating code.
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Aqua VM
|
||||
|
@ -1,55 +0,0 @@
|
||||
# AIR
|
||||
|
||||
The Aquamarine Intermediate Representation \(AIR\) is a low level language to program both distributed networks and the services deployed on them. The language is comprised of a small number of instructions:
|
||||
|
||||
* _**call**_ : execution
|
||||
* _**seq**_ : sequential
|
||||
* _**par** :_ parallel
|
||||
* _**fold**_ : iteration
|
||||
* _**xor** :_ branching & error handling
|
||||
* _**null**_ : empty instruction
|
||||
|
||||
which operate on _peer-id_ \(location\), _service-id_, and _service method_ over an argument list, see Figure 1.
|
||||
|
||||
**Figure 1: AIR Instruction Definition** ![Execution](../../.gitbook/assets/air_call_execution_1.png)
|
||||
|
||||
## Instructions
|
||||
|
||||
AIR instructions are intended to launch the execution of a service method as follows:
|
||||
|
||||
1. The method is executed on the peer specified by the peer id \(location\) parameter
|
||||
2. The peer is expected to have the Wasm service specified by the service id parameter
|
||||
3. The service must have a callable method specified by the method parameter
|
||||
4. The arguments specified by the argument list are passed to the method
|
||||
5. The result of the method is returned under the output name
|
||||
|
||||
**Figure 2: Sequential Instruction** ![Execution](../../.gitbook/assets/air_sequential_2%20%281%29%20%281%29%20%281%29%20%281%29%20%281%29%20%282%29%20%283%29%20%284%29%20%282%29.png)
|
||||
|
||||
The _**seq**_ instruction takes two instructions at most as its arguments and executes them sequentially, one after the other.
|
||||
|
||||
**Figure 3: Parallel Instruction** ![Execution](../../.gitbook/assets/air_par_3.png)
|
||||
|
||||
The _**par**_ instruction takes two instructions at most as its arguments. Particles may execute on parallel paths iff each referenced service is hosted on a different node; otherwise, particles execute sequentially.
|
||||
|
||||
TODO: add a better graphic showing the distinction between branching and seq.
|
||||
|
||||
**Figure 4: Fold Instruction** ![Execution](https://github.com/fluencelabs/gitbook-docs/tree/84e814d02d9299034c9c031adf7f081bb59898b9/.gitbook/assets/air_fold_4%20%281%29%20%282%29%20%281%29.png)
|
||||
|
||||
The _**fold**_ instruction iterates over the elements of an array and works as follows:
|
||||
|
||||
* _**fold**_ instruction takes three arguments: an array, a variable and an instruction
|
||||
* At each iteration, the variable is assigned an element of the array and the argument-instruction is executed
|
||||
* The argument-instruction can access the variable and uses the next statement to trigger the next iteration
|
||||
|
||||
**Figure 5: Branching Instruction** ![Execution](../../.gitbook/assets/air_xor_5.png)
|
||||
|
||||
This instruction is intended for organizing branches in the flow of execution as well as for handling errors:
|
||||
|
||||
* The _**XOR**_ instruction takes two instructions as its arguments
|
||||
* The first instruction is executed and if the execution is successful, then the second instruction is ignored
|
||||
* If the first instruction fails, then the second one is executed.
|
||||
|
||||
**Figure 6: Null Instruction** ![Execution](https://github.com/fluencelabs/gitbook-docs/tree/84e814d02d9299034c9c031adf7f081bb59898b9/.gitbook/assets/air_null_6%20%281%29%20%282%29.png)
|
||||
|
||||
This is an empty instruction: it takes no arguments and does nothing. The _**null**_ instruction is useful for generating code.
|
||||
|
@ -1,16 +0,0 @@
|
||||
# Marine
|
||||
|
||||
[Marine](https://github.com/fluencelabs/marine) is a general purpose WebAssembly runtime favoring Wasm modules based on the [ECS](https://en.wikipedia.org/wiki/Entity_component_system) pattern or plugin architecture and uses Wasm [Interface Types](https://github.com/WebAssembly/interface-types/blob/master/proposals/interface-types/Explainer.md) \(IT\) to implement a [shared-nothing](https://en.wikipedia.org/wiki/Shared-nothing_architecture) linking scheme. Fluence [nodes](https://github.com/fluencelabs/fluence) use Marine to host the Aqua VM and execute hosted Wasm services.
|
||||
|
||||
Todo: we could really do with a diagram
|
||||
|
||||
The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) allows developers to hide the IT implementation details behind a handy procedural macro `[marine]` and provides the scaffolding for unit tests.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,16 +0,0 @@
|
||||
# Marine
|
||||
|
||||
[Marine](https://github.com/fluencelabs/marine) is a general purpose WebAssembly runtime favoring Wasm modules based on the [ECS](https://en.wikipedia.org/wiki/Entity_component_system) pattern or plugin architecture and uses Wasm [Interface Types](https://github.com/WebAssembly/interface-types/blob/master/proposals/interface-types/Explainer.md) \(IT\) to implement a [shared-nothing](https://en.wikipedia.org/wiki/Shared-nothing_architecture) linking scheme. Fluence [nodes](https://github.com/fluencelabs/fluence) use Marine to host the Aqua VM and execute hosted Wasm services.
|
||||
|
||||
Todo: we could really do with a diagram
|
||||
|
||||
The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) allows developers to hide the IT implementation details behind a handy procedural macro `[marine]` and provides the scaffolding for unit tests.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,35 +0,0 @@
|
||||
# Marine CLI
|
||||
|
||||
The [Marine command line tool](https://github.com/fluencelabs/marine) provides the project `marine build` functionality, analogous to `cargo build`, which compiles the Rust code to _wasm32-wasi_ modules. In addition, `marine` provides utilities to inspect Wasm modules, expose Wasm module attributes, or manually set module properties.
|
||||
|
||||
```text
|
||||
mbp16~(:|✔) % marine --help
|
||||
Fluence Marine command line tool 0.6.7
|
||||
Fluence Labs
|
||||
|
||||
USAGE:
|
||||
marine [SUBCOMMAND]
|
||||
|
||||
FLAGS:
|
||||
-h, --help Prints help information
|
||||
-V, --version Prints version information
|
||||
|
||||
SUBCOMMANDS:
|
||||
aqua Shows data types of provided module in a format suitable for Aqua
|
||||
build Builds provided Rust project to Wasm
|
||||
help Prints this message or the help of the given subcommand(s)
|
||||
info Shows manifest and sdk version of the provided Wasm file
|
||||
it Shows IT of the provided Wasm file
|
||||
repl Starts Fluence application service REPL
|
||||
set Sets interface types and version to the provided Wasm file
|
||||
mbp16~(:|✔) %
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,35 +0,0 @@
|
||||
# Marine Repl
|
||||
|
||||
[`mrepl`](https://crates.io/crates/mrepl) is a command line tool to locally run a Marine instance to inspect, run, and test Wasm modules and service configurations. We can run the REPL with either `mrepl` or `marine repl`.
|
||||
|
||||
```text
|
||||
mbp16~(:|✔) % mrepl
|
||||
Welcome to the Marine REPL (version 0.7.2)
|
||||
Minimal supported versions
|
||||
sdk: 0.6.0
|
||||
interface-types: 0.20.0
|
||||
|
||||
New version is available! 0.7.2 -> 0.7.4
|
||||
To update run: cargo +nightly install mrepl --force
|
||||
|
||||
app service was created with service id = d81a4de5-55c3-4cb7-935c-3d5c6851320d
|
||||
elapsed time 486.234µs
|
||||
|
||||
1> help
|
||||
Commands:
|
||||
|
||||
n/new [config_path] create a new service (current will be removed)
|
||||
l/load <module_name> <module_path> load a new Wasm module
|
||||
u/unload <module_name> unload a Wasm module
|
||||
c/call <module_name> <func_name> [args] call function with given name from given module
|
||||
i/interface print public interface of all loaded modules
|
||||
e/envs <module_name> print environment variables of a module
|
||||
f/fs <module_name> print filesystem state of a module
|
||||
h/help print this message
|
||||
q/quit/Ctrl-C exit
|
||||
|
||||
2>
|
||||
```
|
||||
|
||||
|
||||
|
@ -1,537 +0,0 @@
|
||||
# Marine Rust SDK
|
||||
|
||||
The [marine-rs-sdk](https://github.com/fluencelabs/marine-rs-sdk) empowers developers to write services suitable for peer hosting in peer-to-peer networks using the Marine Virtual Machine by enabling the wasm32-wasi compile target for Marine. For an introduction to writing services with the marine-rs-sdk, see the [Developing Modules And Services](../../../development_development/) section.
|
||||
|
||||
### API
|
||||
|
||||
The procedural macros `[marine]` and `[marine_test]` are the two primary features provided by the SDK. The `[marine]` macro can be applied to a function, external block or structure. The `[marine_test]` macro, on the other hand, allows the use of the familiar `cargo test` to execute tests over the actual Wasm module generated from the service code.
|
||||
|
||||
#### Function Export
|
||||
|
||||
Applying the `[marine]` macro to a function results in its export, which means that it can be called from other modules or AIR scripts. For the function to be compatible with this macro, its arguments must be of the `ftype`, which is defined as follows:
|
||||
|
||||
`ftype` = `bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `String`
|
||||
`ftype` = `ftype` \| `Vec`<`ftype`>
|
||||
`ftype` = `ftype` \| `Record`<`ftype`>
|
||||
|
||||
In other words, the arguments must be one of the types listed below:
|
||||
|
||||
* one of the following Rust basic types: `bool`, `u8`, `u16`, `u32`, `u64`, `i8`, `i16`, `i32`, `i64`, `f32`, `f64`, `String`
|
||||
* a vector of elements of the above types
|
||||
* a vector composed of vectors of the above type, where recursion is acceptable, e.g. the type `Vec<Vec<Vec<u8>>>` is permissible
|
||||
* a record, where all fields are of the basic Rust types
|
||||
* a record, where all fields are of any above types or other records
|
||||
|
||||
The return type of a function must follow the same rules, but currently only one return type is possible.
|
||||
|
||||
See the example below of an exposed function with a complex type signature and return value:
|
||||
|
||||
```rust
|
||||
// export TestRecord as a public data structure bound by
|
||||
// the IT type constraints
|
||||
#[marine]
|
||||
pub struct TestRecord {
|
||||
pub field_0: i32,
|
||||
pub field_1: Vec<Vec<u8>>,
|
||||
}
|
||||
|
||||
// export foo as a public function bound by the
|
||||
// IT type constraints
|
||||
#[marine]
|
||||
pub fn foo(arg_1: Vec<Vec<Vec<Vec<TestRecord>>>>, arg_2: String) -> Vec<Vec<Vec<Vec<TestRecord>>>> {
|
||||
unimplemented!()
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
{% hint style="info" %}
|
||||
Function Export Requirements
|
||||
|
||||
* wrap a target function with the `[marine]` macro
|
||||
* function arguments must be of `ftype`
|
||||
* the function return type also must be of `ftype`
|
||||
{% endhint %}
|
||||
|
||||
#### Function Import
|
||||
|
||||
The `[marine]` macro can also wrap an [`extern` block](https://doc.rust-lang.org/std/keyword.extern.html). In this case, all functions declared in it are considered imported functions. If there are imported functions in some module, say, module A, then:
|
||||
|
||||
* There should be another module, module B, that exports the same functions. The name of module B is indicated in the `link` macro \(see examples below\).
|
||||
* Module B should be loaded into `Marine` by the time the loading of module A starts. Module A cannot be loaded if at least one imported function is absent in `Marine`.
|
||||
|
||||
See the examples below for wrapped `extern` block usage:
|
||||
|
||||
{% tabs %}
|
||||
{% tab title="Example 1" %}
|
||||
```rust
|
||||
#[marine]
|
||||
pub struct TestRecord {
|
||||
pub field_0: i32,
|
||||
pub field_1: Vec<Vec<u8>>,
|
||||
}
|
||||
|
||||
// wrap the extern block with the marine macro to expose the function
|
||||
// as an import to the Marine VM
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "some_module")]
|
||||
extern "C" {
|
||||
pub fn foo(arg: Vec<Vec<Vec<Vec<TestRecord>>>>, arg_2: String) -> Vec<Vec<Vec<Vec<TestRecord>>>>;
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
|
||||
{% tab title="Example 2" %}
|
||||
```rust
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "some_module")]
|
||||
extern "C" {
|
||||
pub fn foo(arg: Vec<Vec<Vec<Vec<u8>>>>) -> Vec<Vec<Vec<Vec<u8>>>>;
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
{% endtabs %}
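The examples above only show module A's side, i.e. the importing `extern` block. A minimal sketch of the exporting counterpart, module B, might look like the following; the only requirement stated above is that module B is built as the Wasm module named `some_module` and exports a matching `foo`, everything else here is illustrative:

```rust
// Sketch of module B ("some_module"): it exports the foo function that
// module A imports via #[link(wasm_import_module = "some_module")].
use fluence::marine;
use fluence::module_manifest;

module_manifest!();

pub fn main() {}

#[marine]
pub fn foo(arg: Vec<Vec<Vec<Vec<u8>>>>) -> Vec<Vec<Vec<Vec<u8>>>> {
    // Echo the argument back; a real module would do meaningful work here.
    arg
}
```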
|
||||
|
||||
|
||||
|
||||
{% hint style="info" %}
|
||||
|
||||
|
||||
#### Function import requirements
|
||||
|
||||
* wrap an extern block with the function\(s\) to be imported with the `[marine]` macro
|
||||
* all function\(s\) arguments must be of the `ftype` type
|
||||
* the return type of the function\(s\) must be `ftype`
|
||||
{% endhint %}
|
||||
|
||||
|
||||
|
||||
#### Structures
|
||||
|
||||
Finally, the `[marine]` macro can wrap a `struct`, making it possible to use it as a function argument or return type. Note that
|
||||
|
||||
* only macro-wrapped structures can be used as function arguments and return types
|
||||
* all fields of the wrapped structure must be public and of the `ftype`.
|
||||
* it is possible to have inner records in the macro-wrapped structure and to import wrapped structs from other crates
|
||||
|
||||
See the example below for wrapping `struct`:
|
||||
|
||||
{% tabs %}
|
||||
{% tab title="Example 1" %}
|
||||
```rust
|
||||
#[marine]
|
||||
pub struct TestRecord0 {
|
||||
pub field_0: i32,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub struct TestRecord1 {
|
||||
pub field_0: i32,
|
||||
pub field_1: String,
|
||||
pub field_2: Vec<u8>,
|
||||
pub test_record_0: TestRecord0,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub struct TestRecord2 {
|
||||
pub test_record_0: TestRecord0,
|
||||
pub test_record_1: TestRecord1,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
fn foo(mut test_record: TestRecord2) -> TestRecord2 { unimplemented!(); }
|
||||
```
|
||||
{% endtab %}
|
||||
|
||||
{% tab title="Example 2" %}
|
||||
```rust
|
||||
#[marine]
|
||||
pub struct TestRecord0 {
|
||||
pub field_0: i32,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub struct TestRecord1 {
|
||||
pub field_0: i32,
|
||||
pub field_1: String,
|
||||
pub field_2: Vec<u8>,
|
||||
pub test_record_0: TestRecord0,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub struct TestRecord2 {
|
||||
pub test_record_0: TestRecord0,
|
||||
pub test_record_1: TestRecord1,
|
||||
}
|
||||
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "some_module")]
|
||||
extern "C" {
|
||||
fn foo(mut test_record: TestRecord2) -> TestRecord2;
|
||||
}
|
||||
```
|
||||
{% endtab %}
|
||||
|
||||
{% tab title="Example 3" %}
|
||||
```rust
|
||||
mod data_crate {
|
||||
use fluence::marine;
|
||||
#[marine]
|
||||
pub struct Data {
|
||||
pub name: String,
|
||||
pub data: f64,
|
||||
}
|
||||
}
|
||||
|
||||
use data_crate::Data;
|
||||
use fluence::marine;
|
||||
|
||||
fn main() {}
|
||||
|
||||
#[marine]
|
||||
fn some_function() -> Data {
|
||||
Data {
|
||||
name: "example".into(),
|
||||
data: 1.0,
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
{% endtab %}
|
||||
{% endtabs %}
|
||||
|
||||
|
||||
|
||||
{% hint style="info" %}
|
||||
|
||||
|
||||
> #### Structure passing requirements
|
||||
>
|
||||
> * wrap a structure with the `[marine]` macro
|
||||
> * all structure fields must be of the `ftype`
|
||||
> * the structure must be referred to without a preceding package import in a function signature, i.e. `StructureName` but not `package_name::module_name::StructureName`
|
||||
> * wrapped structs can be imported from crates
|
||||
{% endhint %}
|
||||
|
||||
|
||||
|
||||
#### Call Parameters
|
||||
|
||||
There is a special API function `fluence::get_call_parameters()` that returns an instance of the [`CallParameters`](https://github.com/fluencelabs/marine-rs-sdk/blob/master/fluence/src/call_parameters.rs#L35) structure defined as follows:
|
||||
|
||||
```rust
|
||||
pub struct CallParameters {
|
||||
/// Peer id of the AIR script initiator.
|
||||
pub init_peer_id: String,
|
||||
|
||||
/// Id of the current service.
|
||||
pub service_id: String,
|
||||
|
||||
/// Id of the service creator.
|
||||
pub service_creator_peer_id: String,
|
||||
|
||||
  /// Id of the host which runs this service.
|
||||
pub host_id: String,
|
||||
|
||||
  /// Id of the particle whose execution resulted in a call to this service.
|
||||
pub particle_id: String,
|
||||
|
||||
  /// Security tetraplets which describe the origin of the arguments.
|
||||
pub tetraplets: Vec<Vec<SecurityTetraplet>>,
|
||||
}
|
||||
```
|
||||
|
||||
CallParameters are especially useful in constructing authentication services:
|
||||
|
||||
```rust
|
||||
// auth.rs
|
||||
use fluence::{marine, CallParameters};
|
||||
|
||||
|
||||
pub fn is_owner() -> bool {
|
||||
    let meta = fluence::get_call_parameters();
|
||||
let caller = meta.init_peer_id;
|
||||
let owner = meta.service_creator_peer_id;
|
||||
|
||||
caller == owner
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub fn am_i_owner() -> bool {
|
||||
is_owner()
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
#### MountedBinaryResult
|
||||
|
||||
Due to the inherent limitations of Wasm modules, such as a lack of sockets, it may be necessary for a module to interact with its host to bridge such gaps, e.g., to use an https transport provider like _curl_. In order for a Wasm module to use a host's _curl_ capabilities, we need to provide access to the binary, which at the code level is achieved through the Rust `extern` block:
|
||||
|
||||
```rust
|
||||
// Importing a linked binary, curl, to a Wasm module
|
||||
#![allow(improper_ctypes)]
|
||||
|
||||
use fluence::marine;
|
||||
use fluence::module_manifest;
|
||||
use fluence::MountedBinaryResult;
|
||||
|
||||
module_manifest!();
|
||||
|
||||
pub fn main() {}
|
||||
|
||||
#[marine]
|
||||
pub fn curl_request(curl_cmd: Vec<String>) -> MountedBinaryResult {
|
||||
let response = curl(curl_cmd);
|
||||
response
|
||||
}
|
||||
|
||||
#[marine]
|
||||
#[link(wasm_import_module = "host")]
|
||||
extern "C" {
|
||||
fn curl(cmd: Vec<String>) -> MountedBinaryResult;
|
||||
}
|
||||
```
|
||||
|
||||
The above code creates a "curl adapter", i.e., a Wasm module that allows other Wasm modules to use the `curl_request` function, which calls the imported _curl_ binary, to make http calls. Please note that we wrap the `extern` block with the `[marine]` macro and introduce a Marine-native data structure [`MountedBinaryResult`](https://github.com/fluencelabs/marine/blob/master/examples/url-downloader/curl_adapter/src/main.rs) as the linked-function return value.
|
||||
|
||||
Please note that if you want to use `curl_request` with testing \(see below\), the curl call needs to be marked unsafe, e.g.:
|
||||
|
||||
```rust
|
||||
let response = unsafe { curl(curl_cmd) };
|
||||
```
|
||||
|
||||
since cargo does not have access to the magic in place in the marine rs sdk to handle unsafe.
|
||||
|
||||
MountedBinaryResult itself is a Marine-compatible struct containing a binary's exit code, an error string, and stdout and stderr as byte arrays:
|
||||
|
||||
```rust
|
||||
#[marine]
|
||||
#[derive(Clone, PartialEq, Default, Eq, Debug, Serialize, Deserialize)]
|
||||
pub struct MountedBinaryResult {
|
||||
/// Return process exit code or host execution error code, where SUCCESS_CODE means success.
|
||||
pub ret_code: i32,
|
||||
|
||||
/// Contains the string representation of an error, if ret_code != SUCCESS_CODE.
|
||||
pub error: String,
|
||||
|
||||
/// The data that the process wrote to stdout.
|
||||
pub stdout: Vec<u8>,
|
||||
|
||||
/// The data that the process wrote to stderr.
|
||||
pub stderr: Vec<u8>,
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
MountedBinaryResult can then be used in a variety of match or conditional tests.
|
||||
|
||||
|
||||
|
||||
#### Testing
|
||||
|
||||
Since we are compiling to a wasm32-wasi target with `ftype` constraints, the basic `cargo test` is not all that useful or even usable for our purposes. To alleviate that limitation, Fluence has introduced the [`[marine-test]` macro](https://github.com/fluencelabs/marine-rs-sdk/tree/master/crates/marine-test-macro) that does a lot of the heavy lifting to allow developers to use `cargo test` as intended. That is, the `[marine-test]` macro generates the necessary code to call Marine, one instance per test function, based on the Wasm module and associated configuration file, so that the actual test function is run against the Wasm module, not the native code.
|
||||
|
||||
Let's have a look at an implementation example:
|
||||
|
||||
```rust
|
||||
use fluence::marine;
|
||||
use fluence::module_manifest;
|
||||
|
||||
module_manifest!();
|
||||
|
||||
pub fn main() {}
|
||||
|
||||
#[marine]
|
||||
pub fn greeting(name: String) -> String {    // 1
|
||||
format!("Hi, {}", name)
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
    use fluence_test::marine_test;   // 2
|
||||
|
||||
    #[marine_test(config_path = "../Config.toml", modules_dir = "../artifacts")]  // 3
|
||||
fn empty_string() {
|
||||
        let actual = greeting.greeting(String::new());  // 4
|
||||
assert_eq!(actual, "Hi, ");
|
||||
}
|
||||
|
||||
#[marine_test(config_path = "../Config.toml", modules_dir = "../artifacts")]
|
||||
fn non_empty_string() {
|
||||
let actual = greeting.greeting("name".to_string());
|
||||
assert_eq!(actual, "Hi, name");
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
1. We wrap a basic _greeting_ function with the `[marine]` macro, which results in the greeting.wasm module
|
||||
2. We wrap our tests as usual with `[cfg(test)]` and import the fluence_test crate. Do **not** import _super_ or the _local crate_.
|
||||
3. Instead, we apply the `[marine_test]` macro to each of the test functions by providing the path to the config file, e.g., Config.toml, and the directory containing the Wasm module we obtained after compiling our project with `marine build`. It is imperative that project compilation precedes the test runner, otherwise the required Wasm file won't be there.
|
||||
4. The target of our tests is the `pub fn greeting` function. Since we are calling the function from the Wasm module we must prefix the function name with the module namespace -- `greeting` in this example case.
|
||||
|
||||
Now that we have our Wasm module and tests in place, we can proceed with `cargo test --release`. Note that using the `release` profile vastly improves the import speed of the necessary Wasm modules.
|
||||
|
||||
### Features
|
||||
|
||||
The SDK has two useful features: `logger` and `debug`.
|
||||
|
||||
#### Logger
|
||||
|
||||
Using logging is a simple way to assist in debugging without deploying the module\(s\) to a peer-to-peer network node. The `logger` feature allows you to use a special logger that is built on top of the [log](https://crates.io/crates/log) crate.
|
||||
|
||||
To enable logging, please specify the `logger` feature of the Fluence SDK in `Cargo.toml` and add the [log](https://docs.rs/log/0.4.11/log/) crate:
|
||||
|
||||
```toml
|
||||
[dependencies]
|
||||
log = "0.4.14"
|
||||
fluence = { version = "0.6.9", features = ["logger"] }
|
||||
```
|
||||
|
||||
The logger should be initialized before its usage. This can be done in the `main` function as shown in the example below.
|
||||
|
||||
```rust
|
||||
use fluence::marine;
|
||||
use fluence::WasmLogger;
|
||||
|
||||
pub fn main() {
|
||||
WasmLogger::new()
|
||||
// with_log_level can be skipped,
|
||||
// logger will be initialized with Info level in this case.
|
||||
.with_log_level(log::Level::Info)
|
||||
.build()
|
||||
.unwrap();
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub fn put(name: String, file_content: Vec<u8>) -> String {
|
||||
log::info!("put called with file name {}", file_name);
|
||||
unimplemented!()
|
||||
}
|
||||
```
|
||||
|
||||
In addition to the standard log creation features, the Fluence logger allows the so-called target map to be configured during the initialization step. This allows you to filter out logs by `logging_mask`, which can be set for each module in the service configuration. Let's consider an example:
|
||||
|
||||
```rust
|
||||
const TARGET_MAP: [(&str, i64); 4] = [
|
||||
("instruction", 1 << 1),
|
||||
("data_cache", 1 << 2),
|
||||
("next_peer_pks", 1 << 3),
|
||||
("subtree_complete", 1 << 4),
|
||||
];
|
||||
|
||||
pub fn main() {
|
||||
use std::collections::HashMap;
|
||||
use std::iter::FromIterator;
|
||||
|
||||
let target_map = HashMap::from_iter(TARGET_MAP.iter().cloned());
|
||||
|
||||
fluence::WasmLogger::new()
|
||||
.with_target_map(target_map)
|
||||
.build()
|
||||
.unwrap();
|
||||
}
|
||||
|
||||
#[marine]
|
||||
pub fn foo() {
|
||||
log::info!(target: "instruction", "this will print if (logging_mask & 1) != 0");
|
||||
log::info!(target: "data_cache", "this will print if (logging_mask & 2) != 0");
|
||||
}
|
||||
```
|
||||
|
||||
Here, an array called `TARGET_MAP` is defined and provided to a logger in the `main` function of a module. Each entry of this array contains a string \(a target\) and a number that represents the bit position in the 64-bit mask `logging_mask`. When you write a log message request `log::info!`, its target must coincide with one of the strings \(the targets\) defined in the `TARGET_MAP` array. The log will be printed if `logging_mask` for the module has the corresponding target bit set.
|
||||
|
||||
{% hint style="info" %}
|
||||
REPL also uses the log crate to print logs from Wasm modules. Log messages will be printed if the `RUST_LOG` environment variable is set.
|
||||
{% endhint %}
|
||||
|
||||
|
||||
|
||||
#### Debug
|
||||
|
||||
The application of the second feature is limited to obtaining some of the internal details of the IT execution. Normally, this feature should not be used by a backend developer. Here you can see an example of such details for the greeting service compiled with the `debug` feature:
|
||||
|
||||
```bash
|
||||
# running the greeting service compiled with debug feature
|
||||
~ $ RUST_LOG="info" fce-repl Config.toml
|
||||
Welcome to the Fluence FaaS REPL
|
||||
app service's created with service id = e5cfa463-ff50-4996-98d8-4eced5ac5bb9
|
||||
elapsed time 40.694769ms
|
||||
|
||||
1> call greeting greeting "user"
|
||||
[greeting] sdk.allocate: 4
|
||||
[greeting] sdk.set_result_ptr: 1114240
|
||||
[greeting] sdk.set_result_size: 8
|
||||
[greeting] sdk.get_result_ptr, returns 1114240
|
||||
[greeting] sdk.get_result_size, returns 8
|
||||
[greeting] sdk.get_result_ptr, returns 1114240
|
||||
[greeting] sdk.get_result_size, returns 8
|
||||
[greeting] sdk.deallocate: 0x110080 8
|
||||
|
||||
result: String("Hi, user")
|
||||
elapsed time: 222.675µs
|
||||
```
|
||||
|
||||
The most important information in these logs relates to the `allocate`/`deallocate` function calls. The `sdk.allocate: 4` line corresponds to passing the 4-byte `user` string to the Wasm module, with memory allocated inside the module and the string copied there. The `sdk.deallocate: 0x110080 8` line refers to passing the 8-byte resulting string `Hi, user` to the host side. Since all arguments and results are passed by value, `deallocate` is called to free the no-longer-needed memory inside the Wasm module.
|
||||
|
||||
|
||||
|
||||
#### Module Manifest
|
||||
|
||||
The `module_manifest!` macro embeds the Interface Types \(IT\), SDK and Rust project versions as well as additional project and build information into the Wasm module. For the macro to be usable, it needs to be imported and initialized in the _main.rs_ file:
|
||||
|
||||
```rust
|
||||
// main.rs
|
||||
use fluence::marine;
|
||||
use fluence::module_manifest; // import manifest macro
|
||||
|
||||
module_manifest!(); // initialize macro
|
||||
|
||||
fn main() {}
|
||||
|
||||
#[marine]
|
||||
fn some_function() {}
|
||||
|
||||
```
|
||||
|
||||
Using the Marine CLI, we can inspect a module's manifest with `marine info`:
|
||||
|
||||
```text
|
||||
mbp16~/localdev/struct-exp(main|…) % marine info -i artifacts/*.wasm
|
||||
it version: 0.20.1
|
||||
sdk version: 0.6.0
|
||||
authors: The Fluence Team
|
||||
version: 0.1.0
|
||||
description: foo-wasm, a Marine wasi module
|
||||
repository:
|
||||
build time: 2021-06-11 21:08:59.855352 +00:00 UTC
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Aqua VM
|
||||
|
@ -1,78 +0,0 @@
|
||||
# Tools
|
||||
|
||||
## Fluence Marine REPL
|
||||
|
||||
[`mrepl`](https://crates.io/crates/mrepl) is a command line tool \(CLI\) to locally run a Marine instance to inspect, run, and test module and service configurations.
|
||||
|
||||
```text
|
||||
mbp16~(:|✔) % mrepl
|
||||
Welcome to the Marine REPL (version 0.7.2)
|
||||
Minimal supported versions
|
||||
sdk: 0.6.0
|
||||
interface-types: 0.20.0
|
||||
|
||||
New version is available! 0.7.2 -> 0.7.4
|
||||
To update run: cargo +nightly install mrepl --force
|
||||
|
||||
app service was created with service id = d81a4de5-55c3-4cb7-935c-3d5c6851320d
|
||||
elapsed time 486.234µs
|
||||
|
||||
1> help
|
||||
Commands:
|
||||
|
||||
n/new [config_path] create a new service (current will be removed)
|
||||
l/load <module_name> <module_path> load a new Wasm module
|
||||
u/unload <module_name> unload a Wasm module
|
||||
c/call <module_name> <func_name> [args] call function with given name from given module
|
||||
i/interface print public interface of all loaded modules
|
||||
e/envs <module_name> print environment variables of a module
|
||||
f/fs <module_name> print filesystem state of a module
|
||||
h/help print this message
|
||||
q/quit/Ctrl-C exit
|
||||
|
||||
2>
|
||||
```
|
||||
|
||||
## Fluence Proto Distributor: FLDIST
|
||||
|
||||
[`fldist`](https://github.com/fluencelabs/proto-distributor) is a command line interface \(CLI\) to Fluence peers that allows for the lifecycle management of services and offers the fastest and most effective way to deploy services.
|
||||
|
||||
```text
|
||||
mbp16~(:|✔) % fldist --help
|
||||
Usage: fldist <cmd> [options]
|
||||
|
||||
Commands:
|
||||
fldist completion generate completion script
|
||||
fldist upload Upload selected wasm
|
||||
fldist get_modules Print all modules on a node
|
||||
fldist get_interfaces Print all services on a node
|
||||
fldist get_interface Print a service interface
|
||||
fldist add_blueprint Add a blueprint
|
||||
fldist create_service Create a service from existing blueprint
|
||||
fldist new_service Create service from a list of modules
|
||||
fldist deploy_app Deploy application
|
||||
fldist create_keypair Generates a random keypair
|
||||
fldist run_air Send an air script from a file. Send arguments to
|
||||
"returnService" back to the client to print them in the
|
||||
console. More examples in "scripts_examples" directory.
|
||||
fldist env show nodes in currently selected environment
|
||||
|
||||
Options:
|
||||
--help Show help [boolean]
|
||||
--version Show version number [boolean]
|
||||
-s, --seed Client seed [string]
|
||||
--env Environment to use
|
||||
[required] [choices: "dev", "testnet", "local"] [default: "testnet"]
|
||||
--node-id, --node PeerId of the node to use
|
||||
--node-addr Multiaddr of the node to use
|
||||
--log log level
|
||||
[required] [choices: "trace", "debug", "info", "warn", "error"] [default:
|
||||
"error"]
|
||||
--ttl particle time to live in ms
|
||||
[number] [required] [default: 60000]
|
||||
```
|
||||
|
||||
## Fluence JS SDK
|
||||
|
||||
The Fluence [JS SDK](https://github.com/fluencelabs/fluence-js) enables developers to build full-fledged applications for a variety of targets, ranging from browsers to backend apps, and greatly expands on the `fldist` capabilities.
|
||||
|
@ -1,403 +0,0 @@
|
||||
# Builtin Services
|
||||
|
||||
## Overview
|
||||
|
||||
Each Fluence peer is equipped with a set of "built-in" services that can be called from Aquamarine and fall into the following namespaces:
|
||||
|
||||
1. _peer_ - operations related to connectivity or state of a given peer
|
||||
2. _kad_ - Kademlia API
|
||||
3. _srv_ – management and information about services on a node
|
||||
4. _dist_ – distribution and inspection of modules and blueprints
|
||||
5. _script_ – to manage recurring scripts
|
||||
6. _op_ – basic operations on data
|
||||
7. _deprecated_ – namespace for deprecated API

Below is the reference documentation for all the existing built-in services. Please refer to the JS SDK documentation to learn how to easily use them from the JS SDK.
|
||||
|
||||
Please note that the [`fldist`](knowledge_tools.md#fluence-proto-distributor-fldist) CLI tool, as well as the [JS SDK](knowledge_tools.md#fluence-js-sdk), provide access to node-based services.
|
||||
|
||||
## API
|
||||
|
||||
### peer is\_connected
|
||||
|
||||
Checks if there is a direct connection to the peer identified by a given PeerId
|
||||
|
||||
* **Arguments**:
|
||||
* PeerId – id of the peer to check if there's a connection with
|
||||
* **Returns**: bool - true if connected to the peer, false otherwise
|
||||
|
||||
Example of a service call:
|
||||
|
||||
```scheme
|
||||
(call node ("peer" "is_connected") ["123D..."] ok)
|
||||
```
|
||||
|
||||
### peer connect

Initiates a connection to the specified peer
|
||||
|
||||
* **Arguments**
|
||||
* _PeerId_ – id of the target peer
|
||||
* [_Multiaddr_](https://crates.io/crates/multiaddr) – an array of target peer's addresses
|
||||
* **Returns**: bool - true if connection was successful
|
||||
|
||||
Example of a service call:
|
||||
|
||||
```scheme
|
||||
(seq
|
||||
(call node ("op" "identity") ["/ip4/1.2.3.4/tcp/7777" "/ip4/1.2.3.4/tcp/9999"] addrs)
|
||||
(call node ("peer" "connect") ["123D..." addrs] ok)
|
||||
)
|
||||
```
|
||||
|
||||
### peer get\_contact
|
||||
|
||||
Resolves the contact of a peer via [Kademlia](https://en.wikipedia.org/wiki/Kademlia)
|
||||
|
||||
* **Arguments**
|
||||
* _PeerId_ – id of the target peer
|
||||
* **Returns**: _Contact_ – the peer's contact, i.e. its peer id and the multiaddresses it is known by
|
||||
|
||||
```rust
|
||||
// get_contact return struct
|
||||
Contact {
|
||||
peer_id: PeerId,
|
||||
addresses: [Multiaddr]
|
||||
}
|
||||
```
|
||||
|
||||
Example of a service call:
|
||||
|
||||
```scheme
|
||||
(call node ("peer" "get_contact") ["123D..."] contact)
|
||||
```
|
||||
|
||||
### peer identify
|
||||
|
||||
Get information about the peer
|
||||
|
||||
* **Arguments**: None
|
||||
* **Returns:** _external address_
|
||||
|
||||
```javascript
|
||||
{ "external_addresses": [ "/ip4/1.2.3.4/tcp/7777", "/dns4/stage.fluence.dev/tcp/19002" ] }
|
||||
```
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("peer" "identify") [] info) peer timestamp_ms
|
||||
```
|
||||
|
||||
### peer timestamp\_ms
|
||||
|
||||
Get Unix timestamp in milliseconds
|
||||
|
||||
* **Arguments**: None
|
||||
* **Returns**: _u128_ - number of milliseconds since 1970
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("peer" "timestamp_ms") [] ts_ms)
|
||||
```
|
||||
|
||||
### peer timestamp\_sec
|
||||
|
||||
Get Unix timestamp in seconds
|
||||
|
||||
* **Arguments**: None
|
||||
* **Returns**: _u64_ - number of seconds since 1970
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("peer" "timestamp_sec") [] ts_sec)
|
||||
```
|
||||
|
||||
### kad neighborhood
|
||||
|
||||
Instructs node to return the locally-known nodes in the Kademlia neighborhood for a given key
|
||||
|
||||
* **Arguments**: _key_ – the peer ID \(PeerId\) of the node
|
||||
* **Returns**: _peers_ – an array of PeerIds of the nodes that are in the Kademlia neighborhood for the given hash\(key\)
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("dht" "neighborhood") [key] peers)
|
||||
```
|
||||
|
||||
Please note that this service does _not_ traverse the network and may yield an incomplete neighborhood.
|
||||
|
||||
### srv create
|
||||
|
||||
Used to create a service on a certain node.
|
||||
|
||||
* **Arguments**:
|
||||
* blueprint\_id – ID of the blueprint that has been added to the node specified in the service call by the dist add\_blueprint service.
|
||||
* **Returns**: service\_id – the service ID of the created service.
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("srv" "create") [blueprint_id] service_id)
|
||||
```
|
||||
|
||||
### srv list
|
||||
|
||||
Used to enumerate services deployed to a peer.
|
||||
|
||||
* **Arguments**: None
|
||||
* **Returns**: a list of services running on a peer
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("srv" "list") [] services)
|
||||
```
|
||||
|
||||
### srv add\_alias
|
||||
|
||||
Adds an alias to a service so that the service can be called not only by its service\_id but also by the alias.
|
||||
|
||||
* **Arguments**:
  * alias – the settable service name
  * service\_id – ID of the service you want to alias
|
||||
* **Returns**: alias id
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("srv" "add_alias") [alias service_id])
|
||||
```
|
||||
|
||||
### srv get\_interface
|
||||
|
||||
Retrieves the functional interface of a service running on the node specified in the service call.
|
||||
|
||||
* Argument: service\_id – ID of the service whose interface you want to retrieve.
|
||||
* Returns : an interface object of the following structure:
|
||||
|
||||
```typescript
|
||||
{
|
||||
interface: { function_signatures, record_types },
|
||||
blueprint_id: "uuid-1234...",
|
||||
service_id: "uuid-1234..."
|
||||
}
|
||||
```
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("srv" "get_interface") [service_id] interface)
|
||||
```
|
||||
|
||||
### dist add\_module
|
||||
|
||||
Used to add modules to the node specified in the service call.
|
||||
|
||||
* Arguments:
|
||||
|
||||
* bytes – a base64 string containing the .wasm module to add.
|
||||
* config – an object of the following structure
|
||||
|
||||
```javascript
|
||||
{
|
||||
"name": "my_module_name"
|
||||
}
|
||||
```
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("dist" "add_module") [bytes config] hash)
|
||||
```
|
||||
|
||||
### dist list\_modules
|
||||
|
||||
Get a list of modules available on the node
|
||||
|
||||
* Arguments: None
|
||||
* Returns: an array of objects containing module descriptions
|
||||
|
||||
```javascript
|
||||
[
|
||||
{
|
||||
"name": "moduleA",
|
||||
"hash": "6ebff28c",
|
||||
"config": { "name": "moduleA" }
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("dist" "list_modules") [] modules)
|
||||
```
|
||||
|
||||
### dist get\_module\_interface
|
||||
|
||||
Get the interface of a module
|
||||
|
||||
* Arguments: hash of a module
|
||||
* Returns: an interface of the module \(see _srv get\_interface_\)

Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("dist" "get_interface") [hash] interface)
|
||||
```
|
||||
|
||||
### dist add\_blueprint
|
||||
|
||||
Used to add a blueprint to the node specified in the service call.
|
||||
|
||||
* Arguments: blueprint – an object of the following structure
|
||||
|
||||
```javascript
|
||||
{
|
||||
"name": "good_service",
|
||||
"dependencies": [ "hash:6ebff28c...", "hash:1e59875a...", "hash:d164a07..." ]
|
||||
}
|
||||
```
|
||||
|
||||
Where module dependencies are specified as [_blake3_](https://crates.io/crates/blake3) hashes of modules.
|
||||
|
||||
* Returns: Generated blueprint id
|
||||
|
||||
Example of service call:
|
||||
|
||||
```scheme
|
||||
(call node ("dist" "add_blueprint") [blueprint] blueprint_id)
|
||||
```
|
||||
|
||||
### dist list\_blueprints
Used to get the blueprints available on the node specified in the service call.

* Arguments: None
* Returns: an array of blueprint structures.

A blueprint is an object of the following structure:

```javascript
{
  "id": "uuid-1234-...",
  "name": "good_service",
  "dependencies": [ "hash:6ebff28c...", "hash:1e59875a...", "hash:d164a07..." ]
}
```

Example of service call:

```scheme
(call node ("dist" "list_blueprints") [] blueprints)
```

#### script add

Adds a given script to a node. The script will be called at a fixed interval, with the default setting at approximately three \(3\) seconds.

Recurring scripts can't read variables from data; every value must be a literal. That means that every address or value must be specified as a literal: \(call "QmNode" \("service\_id-1234-uuid" "function"\) \["arg1" "arg2"\]\).

* Arguments:
  * _script_ – a string containing the "literal" script
  * _interval_ – an optional string containing an interval in seconds. If set, the script will be executed periodically at that interval. If omitted, the script will be executed only once. All intervals are rounded to 3 seconds. The minimum interval is 3 seconds.
* Returns: uuid – a script id that can be used to remove that script

Example of service call:

* Without an interval parameter value, the script executes once:

```text
(call node ("script" "add") [script] id)
```

* With an interval parameter value _k_ passed as a string, the script executes every _k_ seconds \(21 in this case\):

```scheme
(call node ("script" "add") [script "21"] id)
```

### script remove

Removes a recurring script from a node. Only the creator of the script can delete it.

* Arguments: _script id_ \(as received from _script add_\)
* Returns: true if the script was deleted and false otherwise

Example of service call:

```scheme
(call node ("script" "remove") [script_id] result)
```

### script list

* Arguments: None
* Returns: A list of existing scripts on the node. Each object in the list is of the following structure:

```javascript
{
  "id": "uuid-1234-...",
  "src": "(seq (call ...",
  "failures": 0,
  "interval": "21s",
  "owner": "123DKooAbcEfgh..."
}
```

Example of a service call:

```scheme
(call node ("script" "list") [] list)
```
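The three `script` calls compose naturally: add a recurring script, inspect what is registered, and remove the script again using the returned id. The sketch below is illustrative only; the `"10"` second interval and the variable names are arbitrary choices, and `script` is assumed to hold a literal script string supplied in the particle data:

```scheme
;; illustrative lifecycle of a recurring script: add, list, remove
(seq
  (seq
    (call node ("script" "add") [script "10"] script_id)
    (call node ("script" "list") [] scripts)
  )
  (call node ("script" "remove") [script_id] removed)
)
```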
### op identity
Acts as an identity function. This service returns exactly what was passed to it. Useful for moving the execution of some service topologically or for extracting some data and putting it into an output variable.

Example of service call:

```scheme
(call node ("op" "identity") [args] result)
```
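A recurring pattern in the AIR scripts in this documentation is to use `op identity` as a no-op hop: calling it on a relay peer moves execution to that peer before the next call runs. A minimal sketch of that pattern, assuming `relay`, `node`, `service_id` and the \(hypothetical\) function name are supplied in the particle data:

```scheme
;; hop to the relay peer, then call a service on the target node (illustrative names)
(seq
  (call relay ("op" "identity") [])
  (call node (service_id "some_function") [arg] result)
)
```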
### deprecated add\_provider
Used in service aliasing. Stores the specified service provider \(provider\) in the internal storage of the node indicated in the service call and associates it with the given key \(key\). After executing add\_provider, the provider can be accessed via the get\_providers service using this key.

* Arguments:
  * key – a string; usually, it is a human-readable service alias.
  * provider – the location of the service. It is an object of the following structure:

```javascript
{
  "peer": "123D...", // PeerId of some peer in the network
  "service_id": "uuid-1234-..." // Optional service_id of the service running on the peer specified by peer
}
```

Example of service call:

```scheme
(call node ("deprecated" "add_provider") [key provider])
```

### deprecated get\_providers

Used in service aliasing to retrieve providers for a given key.

* Arguments: _key_ – a string; usually, it is a human-readable service alias.
* Returns: an array of objects of the following structure:

```javascript
{
  "peer": "123D...", // required field
  "service_id": "uuid-1234-..." // optional field
}
```

Example of service call:

```scheme
(call node ("deprecated" "get_providers") [key] providers)
```
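The two deprecated calls are typically used together: a provider is first registered under a key, and other scripts later resolve that key back to a \(peer, service\_id\) pair. A minimal round-trip sketch; the key `"my-service"` is an illustrative placeholder and `provider` is assumed to be an object of the structure shown above, supplied in the particle data:

```scheme
;; register a provider under a key, then resolve the same key (illustrative values)
(seq
  (call node ("deprecated" "add_provider") ["my-service" provider])
  (call node ("deprecated" "get_providers") ["my-service"] providers)
)
```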
@ -1,159 +0,0 @@
# Security

In the Fluence network, an application consists of one or more services composed with Aquamarine. Services expose actions in the form of functions, and these actions may require authorization. In this section, we present the concept of Security Tetraplets: verifiable origins of function arguments in the form of \(peer\_id, service\_id, function\_name, data\_getter\) tetraplets. This concept enables the secure composition of function calls with AIR scripts.

## Decouple the Authorization Service

Aquamarine, as a composability medium, needs to take care of many aspects of security to enable composing services of different vendors in a safe way. Let's consider the example of an authorization service – a service that verifies permissions:

```text
// Pseudocode of a service interface
service Auth:
    // Works only for the service creator
    def grant_permission(to_peer: PeerId)
    def check_permission(): bool
```

The service contains all the data necessary to check that permission was granted to a given peer. That is, we have authentication and authorization logic.

Consider a simple Blog service with an authorization argument for writes, i.e. adding posts.

```text
service Blog:
    def add_post(text: string, is_permitted: bool)
    def list_posts(): Post[]
```

By decoupling the storage of posts from the user and permissions model, we add a lot of flexibility to our Blog service. Not only can we use it for, say, both personal and corporate blogs but also as a building block for more complex social interactions. Just remember, the blog service itself doesn't care about security guards; it just stores posts, that's all.

Let's write an AIR script that checks permissions and adds a new post, where authNode is the peer running the auth service with service id authSrvId, and blogNode is the peer hosting the blog service with service id blogSrvId:

```text
;; Script is slightly simplified for better readability
;; Setting data is omitted
(seq
    (call authNode (authSrvId "check_permission") [] token)
    (call blogNode (blogSrvId "add_post") [text token])
)
```

This is what we want to have, but now let's see if we can poke holes in our security.

### First Try: Person in the Middle \(PITM/MITM\) Attack

In case check\_permission\(\) returns false, a PITM attacker intercepts the outgoing network package, takes the function output and attempts to replace false with true. This attempt fails, however, as in Aquamarine every peer's ID is derived from its public key and every response is signed with the corresponding private key:

```rust
let resp_signature = sign(particle.signature, srvId, fnName, argsHash, responseHash)
```

Only the private key holder can verifiably sign the output of a function call. Hence, an attacker's attempt to change a function output, or to replay the output of a function call from another particle, leads to particle rejection on blogNode.

### Second Try: Using The Wrong Service

Consider the following script where we set the token to true so that add\_post may assume that permission was actually given.

```text
(seq
    (call %init_peer_id% ("" "get_true") [] token)
    (call blogNode (blogSrvId "add_post") [text token])
)
```

How could we overcome this potential breach? On the blog service host, blogNode, the entire AIR script execution flow is verified. That is, the Aquamarine interpreter visits each instruction and checks whether the particle's data has the result of the execution of this instruction and, if it does, checks that it was produced by the expected peer, service and function, and with the expected arguments. This is verified by the argsHash signed within _resp\_signature_. So when the token is set to a value inside the Aquamarine interpreter, we know the origin of this data: a triplet of peerId, serviceId, functionName.

In our case, the data triplet is %init\_peer\_id%, "", "get\_true", but we expect authNode, authSrvId, "check\_permission" with some known constants for authNode and authSrvId, as we know where we deployed the service. The add\_post function checks this triplet along with the token argument and rejects the particle. Hence, we failed to trick the system by faking the argument's origin, as only the Auth service is considered a valid source of truth for authorization tokens.

Our attack got thwarted again, but we have a few more tricks up our sleeves.

### Third Try: Using The Wrong Piece Of Data

Let's make a more sophisticated AuthStatus service that provides more data associated with the current peer id:

```text
struct Status:
    is_admin: bool
    is_misbehaving: bool

service AuthStatus:
    def get_status(): Status
```

If this peer misbehaves, we set a special flag as follows:

```text
;; Script is slightly simplified for better readability
;; Setting data is omitted
(seq
    (call authNode (authSrvId "get_status") [] status)
    (call blogNode (blogSrvId "add_post") [text status.$.is_admin])
)
```

So we pass an _is\_admin_ flag to the blogNode, as we now have a permissioned blog, and all is well. Maybe.

The problem is that we could just as well pass the _is\_misbehaving_ flag to fake admin permissions and add a post. Consider other possible scenarios where, for example, you could have a role in the status as well as a nickname, and you need to distinguish the two, even though both are strings.

Recall that the origin of the result is stated with three values _peerId_, _serviceId_, _functionName_, while the origin of the argument is extended with one more attribute: the data getter. This forms a structure of four fields – the **tetraplet**:

```text
struct SecurityTetraplet:
    peer_id: string
    service_id: string
    fn_name: string
    getter: string
```

The Aquamarine interpreter provides this tetraplet along with each argument during the function call, and the called service checks them if deemed necessary. In fact, tetraplets are present for every argument as a vector of vectors of tetraplets:

```text
pub tetraplets: Vec<Vec<SecurityTetraplet>>
```

which is possible due to the use of accumulators in AIR, produced with the fold instruction. Usually, you don't need to care about them, and only the first, i.e. origin, tetraplet is set.

## Limitations Of The Authentication Approach

This strategy posits that only arguments should affect function behavior, decoupling the service from the AIR script and its input data. That is, the \(public\) service API is safe only if it relies on exogenous permission checks, since the security invariants have no access to the AIR script or its input data.

### Only Arguments Affect The Function Execution

This API cannot be used safely:

```text
service WrongAuth:
    def get_status_or_fail() // does not return if not authorized
```

as the _WrongAuth_ service cannot be used to provide the expected checks:

```text
(seq
    (call authNode (authSrv "get_status_or_fail") []) ;; no return
    (call blogNode (blogSrv "add_post") [text])       ;; no data
)
```

In the above script, if _get\_status\_or\_fail_ fails, _add\_post_ never executes. But nothing prevents a user from calling _add\_post_ directly, so this design cannot be considered secure. That's why a security service must produce an output that is later provided as an argument.

### Only Direct Dependencies Are Taken Into Account

Consider a modified WrongAuth which takes the peer id as an argument:

```text
service WrongAuth:
    def get_status(peer_id) // Status of the given peer
```

In this case, a tetraplet can easily verify that the input argument was not compromised. But which data is it? As the arguments of the _get\_status_ function are not part of a tetraplet, we can't check that the right peer\_id was provided to the function. So from a design perspective, it is preferable for _get\_status_ to not take arguments, so that the input cannot be altered.

What if we want to make the system secure in terms of tracking the data origin by taking the arguments into account? In this case, the verifier function _add\_post_ needs to know not only the name of the provider but also its structure, i.e., what inputs it has and, even worse, what the constraints of these inputs are and how to verify them. Since we cannot perform garbage collection easily, we would need to express the model of the program, i.e., the auth service and the AIR script, on the verifier side.

This makes decomposition a pain: why decouple services if we need them to know so much about each other? That's why function calls in Aquamarine depend on the direct inputs, and direct inputs only.

**References**

* [Tetraplet implementation in the Aquamarine interpreter](https://github.com/fluencelabs/aquamarine/blob/master/crates/polyplets/src/tetraplet.rs)
* [Example of checking tetraplets for authorization in Fluent Pad](https://github.com/fluencelabs/fluent-pad/blob/main/services/history-inmemory/src/service_api.rs#L91)
* [Getting tetraplets with Rust SDK](https://github.com/fluencelabs/rust-sdk/blob/master/crates/main/src/call_parameters.rs#L35)
@ -1,49 +0,0 @@
# Tools

### Fluence Proto Distributor: FLDIST

[`fldist`](https://github.com/fluencelabs/proto-distributor) is a command line interface \(CLI\) to Fluence peers allowing for the lifecycle management of services, and offers the fastest and most effective way to deploy services.

```text
mbp16~(:|✔) % fldist --help
Usage: fldist <cmd> [options]

Commands:
  fldist completion      generate completion script
  fldist upload          Upload selected wasm
  fldist get_modules     Print all modules on a node
  fldist get_interfaces  Print all services on a node
  fldist get_interface   Print a service interface
  fldist add_blueprint   Add a blueprint
  fldist create_service  Create a service from existing blueprint
  fldist new_service     Create service from a list of modules
  fldist deploy_app      Deploy application
  fldist create_keypair  Generates a random keypair
  fldist run_air         Send an air script from a file. Send arguments to
                         "returnService" back to the client to print them in the
                         console. More examples in "scripts_examples" directory.
  fldist env             show nodes in currently selected environment

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  -s, --seed         Client seed  [string]
  --env              Environment to use
                     [required] [choices: "dev", "testnet", "local"] [default: "testnet"]
  --node-id, --node  PeerId of the node to use
  --node-addr        Multiaddr of the node to use
  --log              log level
                     [required] [choices: "trace", "debug", "info", "warn", "error"] [default: "error"]
  --ttl              particle time to live in ms
                     [number] [required] [default: 60000]
```

### Fluence JS SDK

The Fluence [JS SDK](https://github.com/fluencelabs/fluence-js) enables developers to build full-fledged applications for a variety of targets ranging from browsers to backend apps, and greatly expands on the `fldist` capabilities.

### Marine Tools

Marine offers multiple tools including the Marine CLI, REPL and SDK. Please see the [Marine section](knowledge_aquamarine/marine/) for more detail.
13
node.md
13
node.md
@ -1,13 +0,0 @@
# Node

The Fluence protocol is implemented as the Fluence [reference node](https://github.com/fluencelabs/fluence), which includes the

* Peer-to-peer communication layer
* Marine interpreter
* Aqua VM
* Builtin services

and more.

Builtin services are available on every Fluence peer and can be programmatically accessed and composed using Aqua. For a complete list of builtin services, see the builtin.aqua file in the [Aqua Lib](https://github.com/fluencelabs/aqua-lib) repo. To learn how to create your own builtin service, see the [Add Your Own Builtins](tutorials_tutorials/add-your-own-builtin.md) tutorial.
@ -1,11 +0,0 @@
# Node

The Fluence protocol is implemented as the Fluence [reference node](https://github.com/fluencelabs/fluence), which includes the

* Peer-to-peer communication layer
* Marine interpreter
* Aqua VM
* Builtin services

and more.
75
p2p.md
75
p2p.md
@ -1,75 +0,0 @@
# Thinking In Aquamarine

Permissionless peer-to-peer networks have a lot to offer to developers and solution architects, such as decentralization, control over data, improved request-response data models and zero-trust security at the application and service level. Of course, these capabilities and benefits don't just arise from putting [libp2p](https://libp2p.io/) to work. Instead, a peer-to-peer overlay is required. The Fluence protocol provides such an overlay enabling a powerful distributed data routing and management protocol that allows developers to implement modern and secure Web3 solutions. See Figure 1 for a stylized representation of decentralized application development by programming the composition of services distributed across a peer-to-peer network.

Figure 1: Decentralized Applications Composed From Distributed Services On P2P Nodes

![](https://i.imgur.com/XxC7NN3.png)

## Aquamarine

As a complement to the protocol, Fluence provides the Aquamarine stack aimed at enabling developers to build high-quality, high-performance decentralized applications. Aquamarine is purpose-built to ease the programming demands commonly encountered in distributed, and especially peer-to-peer, development and is comprised of Aqua and Marine.

[Aqua](https://doc.fluence.dev/aqua-book/) is a new-generation programming language allowing developers to program peer-to-peer networks and compose distributed services hosted on peer-to-peer nodes into decentralized applications and backends. Marine, on the other hand, provides the necessary Wasm runtime environment on peers to facilitate the execution of compiled Aqua code.

A major contribution of Aquamarine is that network and application layer programming, i.e., [Layer 3 and Layer 7](https://en.wikipedia.org/wiki/OSI_model), is accessible to developers as a seamless and ergonomic composition-from-services experience in Aqua, thereby greatly reducing, if not eliminating, common barriers to distributed and decentralized application development.

## **Improved Request-Response Model**

In some network models, such as client-server, the request-response model generally entails a response returning to the requesting client. For example, a client application tasked to conduct a credit check of a customer and to inform them with an SMS typically would call a credit check API, consume the response, and then call an SMS API to send the necessary SMS.

Figure 2: Client Server Request Response Model

![](.gitbook/assets/image%20%2811%29.png)

The Fluence peer-to-peer protocol, on the other hand, allows for a much more effective request-response processing pattern where responses are forward-chained to the next consuming service\(s\) without having to make the return trip to the client. See Figure 3.

Figure 3: Fluence P2P Protocol Request Response Model

![](.gitbook/assets/image%20%2810%29.png)

In a Fluence p2p implementation, our client application would call a credit check API deployed or proxy-ed on some peer and then send the response directly to the SMS API service, possibly deployed on another peer -- similar to the flow depicted in Figure 1.
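Expressed as AIR, such a forward-chained flow is simply two sequential calls where the output of the first call feeds the second without returning to the client. The sketch below is illustrative only: `creditNode`, `creditSrvId`, `smsNode`, `smsSrvId` and the function names are hypothetical placeholders, not services that actually exist on the network:

```scheme
;; forward-chaining sketch: credit check result flows straight into the SMS call
(seq
  (call creditNode (creditSrvId "credit_check") [customer] report)
  (call smsNode (smsSrvId "send_sms") [phone report])
)
```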
Such a significantly flattened request-response model leads to much lower resource requirements for applications in terms of bandwidth and processing capacity, thereby enabling a vast class of "thin" clients ranging from browsers to IoT and edge devices and truly enabling decentralized machine-to-machine communication.

## **Zero Trust Security**

The [zero trust security model](https://en.wikipedia.org/wiki/Zero_trust_security_model) assumes the worst, i.e., a breach, at all times and proposes a "never trust, always verify" approach. This approach is inherent in the Fluence peer-to-peer protocol and Aqua programming model, as every service request can be authenticated at the service API level. That is, every service exposes functions which may require authentication and authorization. Aquamarine implements SecurityTetraplets as verifiable origins of the function arguments to enable fine-grained authorization.

## Service Granularity And Redundancy

Services cannot process more than one request at any given time. Consider a service, FooBar, comprised of two functions, foo\(\) and bar\(\), where foo is a longer-running function.

```text
-- Stylized FooBar service with two functions
-- foo() and bar()
-- foo is long-running
-- if foo is called before bar, bar is blocked
service FooBar("service-id"):
    bar() -> string
    foo() -> string --< long running function

func foobar(node: string, service_id: string, func_name: string) -> string:
    res: *string
    on node:
        FooBar service_id
        if func_name == "foo":
            res <- FooBar.foo()
        else:
            res <- FooBar.bar()
    <- res!
```

As long as foo\(\) is running, the entire FooBar service, including bar\(\), is blocked. This has implications with respect to both service granularity and redundancy, where service granularity captures the number of functions per service and redundancy refers to the number of service instances deployed to different peers.

## Summary

Programming distributed applications on the Fluence protocol with Aquamarine unlocks significant benefits from peer-to-peer networks while greatly easing the design and development processes. Nevertheless, a mental shift concerning the peer-to-peer solution design and development process is required. Specifically, the successful mindset accommodates

* an application architecture based on the composition of distributed services across peer-to-peer networks by decoupling business logic from application workflow
* a services-first approach with respect to both the network and application layer allowing a unified network and application programming model encapsulated by Aqua
* a multi-layer security approach enabling zero-trust models at the service level
* a flattened request-response model enabling data free from centralized control
* a services architecture with respect to granularity and redundancy influenced by service function runtime
@ -1,2 +0,0 @@
# Quick Start
@ -1,112 +0,0 @@
# Quick Start

The Fluence solution enables a new class of decentralized Web3 solutions providing technical, security and business benefits otherwise not available. In order for solution architects and developers to realize these benefits, a shift in philosophy and implementation is required. With the Fluence tool chain available, developers should find it possible to code meaningful Web3 solutions in short order once an understanding of the core concepts and Aqua is in place.

The remainder of this section introduces the core concepts underlying the Fluence solution.

**Particles**

Particles are Fluence's secure distributed state medium, i.e., conflict-free replicated data structures containing application data, workflow scripts and some metadata, that traverse programmatically specified routes in a highly secure manner. That is, _particles_ hop from distributed compute service to distributed compute service across the peer-to-peer network as specified by the application workflow, updating along the way.

Figure 4: Node-Service Perspective Of Particle Workflow ![](https://i.imgur.com/u4beJgh.png)

Not surprisingly, particles are an integral part of the Fluence protocol and stack. It is the very decoupling of data + workflow instructions from the service and network components that allows the secure composition of applications from services distributed across a permissionless peer-to-peer network.

While the application state change resulting from passing a particle "through" a service with respect to the data components is quite obvious, the ensuing state change with respect to the workflow also needs to be recognized, which is handled by the Aqua VM.

As depicted in Figure 4, a particle traverses to a destination node's Aqua VM where the next execution step is evaluated and, if specified, triggered. That is, the service programmatically specified to operate on the particle's data is called from the Aqua VM, the particle's data and workflow \(step\) are updated, and the Aqua VM routes the particle to the next specified destination, which may be on the same, another or the client peer.

**Aqua**

An integral enabler of the Fluence solution is Aqua, an open source language purpose-built to enable developers to ergonomically program distributed networks and applications by composition. Aqua scripts compile to an intermediary representation, called AIR, which executes on the Aqua Virtual Machine, Aqua VM, itself a Wasm module hosted on the Marine interpreter on every peer node.

Figure 5: From Aqua Script To Particle Execution

![](../.gitbook/assets/image%20%286%29.png)

Currently, compiled Aqua scripts can be executed from Typescript clients based on the [Fluence SDK](https://github.com/fluencelabs/fluence-js). For more information about Aqua, see the [Aqua book](https://doc.fluence.dev/aqua-book/).

**Marine**

Marine is Fluence's generalized Wasm runtime executing Wasm Interface Type \(IT\) modules with Aqua VM compatible interfaces on each peer. Let's unpack.

Services behave similarly to microservices: they are created on nodes and served by the Marine VM and can _only_ be called by the Aqua VM. They are also passive in that they can accept incoming calls but can't initiate an outgoing request without being called.

Services are

* comprised of Wasm IT modules that can be composed into applications
* developed in Rust for a wasm32-wasi compile target
* deployed on one or more nodes
* running on the Marine VM which is deployed to every node

Figure 6: Stylized Execution Flow On Peer

![](../.gitbook/assets/image%20%285%29.png)

Please note that the Aqua VM is itself a Wasm module running on the Marine VM.

The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) abstracts the Wasm IT implementation detail behind a handy macro that allows developers to easily create Marine VM compatible Wasm modules. In the example below, applying the `marine` macro turns a basic Rust function into a Wasm IT compatible function and enforces the type requirements at the compiler level.

```rust
#[marine]
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```

**Service Creation**

Services are logical constructs instantiated from Wasm modules that contain some business logic and configuration data. That is, services are created, i.e., linked, at the Marine VM runtime level from uploaded Wasm modules and the relevant metadata.

_Blueprints_ are json documents that provide the necessary information to build a service from the associated Wasm modules.

Figure 7: Service Composition and Execution Model

![](../.gitbook/assets/image%20%287%29.png)

Recall that services cannot process more than one request at a given time.

**Modules**

In the Fluence solution, Wasm IT modules take one of three forms:

* Facade Module: exposes the API of the service comprised of one or more modules. Every service has exactly one facade module
* Pure Module: performs computations without side-effects
* Effector Module: performs at least one computation with a side-effect

It is important for architects and developers to be aware of their module and service construction with respect to state changes.

**Authentication And Permissioning**

Authentication at the service API level is an inherent feature of the Fluence solution. This fine-grained approach essentially provides [ambient authority](https://en.wikipedia.org/wiki/Ambient_authority) out of the box.

In the Fluence solution, this is accomplished by a SecurityTetraplet, which is a data structure with four data fields:

```rust
struct SecurityTetraplet:
    peer_id: string
    service_id: string
    fn_name: string
    getter: string
```

SecurityTetraplets are provided with the function call arguments for each \(service\) function call and are checked by the called service. Hence, authentication based on the **\(service caller id == service owner id\)** relation can be established at service ingress and leveraged to build powerful, fine-grained identity and access management solutions enabling true zero-trust architectures.

**Trust Layer**

Since we're not really ready, should we cut this section?

The Fluence protocol offers an alternative to node selection, i.e. connection and permissioning, approaches such as Kademlia, called TrustGraph. A TrustGraph is comprised of subjective weights assigned to nodes to manage peer connections. TrustGraphs are node-operator specific and transitive. That is, a trusted node's trusted neighbors are considered trustworthy.

**Scaling Apps**

As discussed previously, decoupling at the network and business logic levels is at the core of the Fluence protocol and provides the major entry points for scaling solutions.

At the peer-to-peer network level, scaling can be achieved with subnetworks. Subnetworks are currently under development and we will update this section in the near future.

At the service level, we can achieve scale through parallelization due to the decoupling of resource management from infrastructure. That is, sequential and parallel execution flow logic are an inherent part of Aqua's programming model. In order to achieve concurrency, the target services need to be available on multiple peers, as module calls are blocking.

Figure 8: Stylized Par Execution

![](../.gitbook/assets/image%20%288%29.png)
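To make the parallelization point concrete, parallel execution in AIR is expressed with the `par` instruction over service instances hosted on different peers. The sketch below is purely illustrative and assumes two deployed instances of a hypothetical greeting service, identified by \(node\_1, service\_1\) and \(node\_2, service\_2\), supplied in the particle data:

```scheme
;; illustrative par sketch: call two instances of the same (hypothetical) service in parallel
(seq
  (par
    (call node_1 (service_1 "greeting") [name] res_1)
    (call node_2 (service_2 "greeting") [name] res_2)
  )
  (call %init_peer_id% (returnService "run") [res_1 res_2])
)
```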
@ -1,10 +0,0 @@
# Adding A Storage Service

So far, all of the modules we have used were stateless, and we did not have to give security much thought. If application A and application B both use our curl service, we don't run into any kind of conflict other than maybe scheduling. Not so for stateful services like a database. If we take no precautions with respect to authorization, other users of the service may, for example, alter or delete our data. And that's definitely not what we want, especially if we plan on monetizing that data.

Without diving too deep into the Fluence security framework, you should be aware that Fluence has an out-of-the-box authentication primitive that allows a `user == owner` check not unlike what we've seen in various blockchain platforms or plain old _sudo_. Of course, the Fluence security framework goes much further and affords developers a great deal of flexibility to code a security solution to their needs. For the purpose of our tutorial, however, we'll stick with the built-in authentication and use it as a base for [ambient authority](https://en.wikipedia.org/wiki/Ambient_authority). That is, we use authentication as an authorization guard for select functions at development time and provide the necessary credentials at service call time.

For the purposes of this tutorial, there is a caveat you need to keep in mind: every reader of this document inevitably ends up using the same sample service with the same ownership control. In the highly, highly unlikely event you're getting funky results, it's most likely due to someone else doing the very same tutorial at the very same time. [Jinx](https://en.wikipedia.org/wiki/Jinx_%28game%29)! Buy me a Coke, drink the Coke, slowly, try again and you should be fine.

The next sections explore both the setup and the use of our database: Sqlite as a Service.
@ -1,299 +0,0 @@
# CRUD All the Way

It's finally time to populate our shiny new storage as a service with some data. In order to suss out our system, let's run an AIR smoke test. Save the script below to ethqlite\_roundtrip.clj.

```text
(xor
  (seq
    (seq
      (seq
        (seq
          (seq
            (seq
              (seq
                (call relay ("op" "identity") [])
                (call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
              )
              (seq
                (call relay ("op" "identity") [])
                (call %init_peer_id% (returnService "run") [hex_block_result])
              )
            )
            (seq
              (seq
                (call relay ("op" "identity") [])
                (call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
              )
              (seq
                (call relay ("op" "identity") [])
                (call %init_peer_id% (returnService "run") [int_block_result])
              )
            )
          )
          (seq
            (seq
              (call relay ("op" "identity") [])
              (call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
            )
            (seq
              (call relay ("op" "identity") [])
              (call %init_peer_id% (returnService "run") [block_result])
            )
          )
        )
        (seq
          (seq
            (call relay ("op" "identity") [])
            (call sqlite_node (sqlite_service "update_reward_blocks") [block_result] insert_result)
          )
          (seq
            (call relay ("op" "identity") [])
            (call %init_peer_id% (returnService "run") [insert_result])
          )
        )
      )
      (seq
        (seq
          (call relay ("op" "identity") [])
          (call sqlite_node (sqlite_service "get_latest_reward_block") [] select_result)
        )
        (seq
          (call relay ("op" "identity") [])
          (call %init_peer_id% (returnService "run") [select_result])
        )
      )
    )
    (seq
      (seq
        (seq
          (call relay ("op" "identity") [])
          (call sqlite_node (sqlite_service "get_reward_block") [int_block_result] select_result_2)
        )
        (seq
          (call relay ("op" "identity") [])
          (call %init_peer_id% (returnService "run") [select_result_2])
        )
      )
      (seq
        (seq
          (call relay ("op" "identity") [])
          (call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!] select_result_3)
        )
        (seq
          (call relay ("op" "identity") [])
          (call %init_peer_id% (returnService "run") [select_result_3])
        )
      )
    )
  )
  (seq
    (call relay ("op" "identity") [])
    (call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
  )
)
```

The top part of the script is identical to what we used before:

* get the latest block number \(`get_latest_block`\),
* convert the hex string to an integer \(`hex_to_int`\) and
* retrieve the reward block info \(`get_block`\)

The new service components called are:

* _update\_reward\_blocks_, which takes the `get_block` output and writes it to the db. Please note that this service requires authentication,
* _get\_latest\_reward\_block_, which is a read operation querying the most recent row in the reward block table,
* _get\_reward\_block_, which takes a block number, in this case the one produced by `hex_to_int`, and finally
* _get\_miner\_rewards_, which returns a list of miner rewards for a particular miner address; in this case, the one provided by the `get_reward_block` result. Note the `$` operator to access the `block_miner` field in the return struct and the `!` operator to flatten the response

From the previous section we know that

* service\_1: 74d5c5da-4c83-4af9-9371-2ab5d31f8019 , node\_1: 12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H
* service\_2: 285e2a5e-e505-475f-a99d-15c16c7253f9 , node\_2: 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH
* service\_sqlite: 506528d3-3aaf-4ef5-a97d-18f1654fcf8d , node\_sqlite: 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH
* api\_key: your Etherscan API key
* client seed: Dq3rsUZUs25FGrZM3qpiUzyKJ3NFgtqocgGRqWq9YGsx

which allows us to construct the payload for our `fldist` command-line app:

```bash
fldist --node-id 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH run_air -p air-scripts/ethqlite_roundtrip.clj -d '{"service_1":"74d5c5da-4c83-4af9-9371-2ab5d31f8019", "node_1":"12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H","service_2":"285e2a5e-e505-475f-a99d-15c16c7253f9", "node_2": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "sqlite_service":"506528d3-3aaf-4ef5-a97d-18f1654fcf8d", "sqlite_node":"12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "api_key": "your api key"}' -s Dq3rsUZUs25FGrZM3qpiUzyKJ3NFgtqocgGRqWq9YGsx
```

and upon execution gives us the expected results flow:

```bash
===================
[
  "0xb736bc"
]
[
  [
    {
      peer_pk: '12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H',
      service_id: '74d5c5da-4c83-4af9-9371-2ab5d31f8019',
      function_name: 'get_latest_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  12007100
]
[
  [
    {
      peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
      service_id: '285e2a5e-e505-475f-a99d-15c16c7253f9',
      function_name: 'hex_to_int',
      json_path: ''
    }
  ]
]
===================
===================
[
  "{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12007100\",\"timeStamp\":\"1615330221\",\"blockMiner\":\"0x04668ec2f57cc15c381b461b9fedab5d451c8f7f\",\"blockReward\":\"3799386136990487274\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}"
]
[
  [
    {
      peer_pk: '12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H',
      service_id: '74d5c5da-4c83-4af9-9371-2ab5d31f8019',
      function_name: 'get_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "err_str": "",
    "success": 1
  }
]
[
  [
    {
      peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
      service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
      function_name: 'update_reward_blocks',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "block_miner": "\"0x04668ec2f57cc15c381b461b9fedab5d451c8f7f\"",
    "block_number": 12007100,
    "block_reward": "3799386136990487274",
    "timestamp": 1615330221
  }
]
[
  [
    {
      peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
      service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
      function_name: 'get_latest_reward_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "block_miner": "\"0x04668ec2f57cc15c381b461b9fedab5d451c8f7f\"",
    "block_number": 12007100,
    "block_reward": "3799386136990487274",
    "timestamp": 1615330221
  }
]
[
  [
    {
      peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
      service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
      function_name: 'get_reward_block',
      json_path: ''
    }
  ]
]
===================
===================
[
  {
    "miner_address": "\"0x04668ec2f57cc15c381b461b9fedab5d451c8f7f\"",
    "rewards": [
      "3799386136990487274"
    ]
  }
]
[
  [
    {
      peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
      service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
      function_name: 'get_miner_rewards',
      json_path: ''
    }
  ]
]
===================
```

Feel free to re-run the workflow without the `-s` flag. Before we conclude, let's clean up our digital footprint and reset the Sqlite service for other tutorial users.

```text
(xor
    (seq
        (seq
            (call relay ("op" "identity") [])
            (call sqlite_node (sqlite_service "owner_nuclear_reset") [] result)
        )
        (seq
            (call relay ("op" "identity") [])
            (call %init_peer_id% (returnService "run") [result])
        )
    )
    (seq
        (call relay ("op" "identity") [])
        (call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
    )
)
```

Save the script to _ethqlite\_reset.clj_ and run it with:

```bash
fldist --node-id 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH run_air -p air-scripts/ethqlite_reset.clj -d '{"service": "506528d3-3aaf-4ef5-a97d-18f1654fcf8d", "node_1": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH"}' -s Dq3rsUZUs25FGrZM3qpiUzyKJ3NFgtqocgGRqWq9YGsx
```

to get

```bash
[
  1
]
[
  [
    {
      peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
      service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
      function_name: 'owner_nuclear_reset',
      json_path: ''
    }
  ]
]
```

Of course, if you tried that without the `-s` flag, the return result would be 0. Try re-running the _ethqlite\_roundtrip.clj_ script after the reset and without initialization.

In summary, you have extended the multi-service application with a Sqlite database service by simply adding the service methods to your existing workflow. We further extended the workflow by adding read queries which, of course, could be run separately or even as a separate application that might include a service-side paywall for data retrieval.
@ -1,161 +0,0 @@
|
||||
# Setting Up
|
||||
|
||||
For our database solution we have two goals in mind:
|
||||
|
||||
1. Persist reward block information and
|
||||
2. Allow for the querying of the database
|
||||
|
||||
while maintaining not only data integrity but also access control, which implies that we need some sort of authentication and authorization scheme that allows us to nominate and verify privileged users such as the service owner.
|
||||
|
||||
Again using the [Fluence Dashboard](https://dash.fluence.dev) we can find a Sqlite database service solution, such as this [service](https://dash.fluence.dev/blueprint/b67afc58ed7d15757303b60e694ed5083faedb466b7cc36242fa0979d4f8b1b7), which gives us:
|
||||
|
||||
* service id: 506528d3-3aaf-4ef5-a97d-18f1654fcf8d and
|
||||
* node id: 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH
|
||||
|
||||
As with most stateful solutions, we need to take care of a little pre-work to set up the database, e.g., create tables, insert default values, etc., which requires us to fire a one-time initialization call. But before we get to the service initialization, let's check whether we have the necessary owner privileges with a small AIR script:
|
||||
|
||||
```text
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service "am_i_owner") [] result)
|
||||
; (call node_1 (service "get_tplet") [] result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
This should look rather familiar as we're calling the am\_i\_owner method of the remote service to see if `owner == us` using the familiar `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH run_air -p air-scripts/ethqlite_owner.clj -d '{"service": "506528d3-3aaf-4ef5-a97d-18f1654fcf8d", "node_1": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH"}'
|
||||
```
|
||||
|
||||
which yields
|
||||
|
||||
```bash
|
||||
[
|
||||
0
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
```
|
||||
|
||||
Looking at that pesky 0, i.e. false, return value, it seems that we have no owner privileges. Let's see if the service really enforces ownership requirements and execute the init script discussed earlier. Our AIR script is pretty short and executes only the `init_service` method to handle table creation.
|
||||
|
||||
```text
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service "init_service") [] result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
Run with:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH run_air -p air-scripts/ethqlite_init.clj -d '{"service": "506528d3-3aaf-4ef5-a97d-18f1654fcf8d", "node_1": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH"}'
|
||||
```
|
||||
|
||||
which yields
|
||||
|
||||
```bash
|
||||
[
|
||||
{
|
||||
"err_msg": "Not authorized to use this service",
|
||||
"success": 0
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
|
||||
function_name: 'init_service',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
```
|
||||
|
||||
Looks like the service wasn't joking and we need to have ownership privileges to even initiate the service. In order to prove ownership for the service, we need what's called the client seed, or just seed, in Fluence parlance, which is derived from the private key of a user's keypair and used in the module upload and service deployment process. For our purposes, it suffices to say that the seed is `Dq3rsUZUs25FGrZM3qpiUzyKJ3NFgtqocgGRqWq9YGsx` and re-running the scripts with the seed information largely improves our lot.
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH run_air -p air-scripts/ethqlite_owner.clj -d '{"service": "506528d3-3aaf-4ef5-a97d-18f1654fcf8d", "node_1": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH"}' -s Dq3rsUZUs25FGrZM3qpiUzyKJ3NFgtqocgGRqWq9YGsx
|
||||
```
|
||||
|
||||
now gives us a juicy thumbs up regarding ownership:
|
||||
|
||||
```bash
|
||||
[
|
||||
1
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
```
|
||||
|
||||
and the db init
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH run_air -p air-scripts/ethqlite_init.clj -d '{"service": "506528d3-3aaf-4ef5-a97d-18f1654fcf8d", "node_1": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH"}' -s Dq3rsUZUs25FGrZM3qpiUzyKJ3NFgtqocgGRqWq9YGsx
|
||||
```
|
||||
|
||||
also executes:
|
||||
|
||||
```bash
|
||||
[
|
||||
{
|
||||
"err_msg": "",
|
||||
"success": 1
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '506528d3-3aaf-4ef5-a97d-18f1654fcf8d',
|
||||
function_name: 'init_service',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
```
|
||||
|
||||
We now have the tools and confidence that we can authenticate as the service owner and that the service actually implements some authorization schema. Specifically, we expect all state changing operations to utilize the authorization guard and the read operations to be available without authorization.
|
||||
|
@ -1,340 +0,0 @@
|
||||
# Building An Application From Multiple Services
|
||||
|
||||
In this section, we compose multiple services into an application to catalog block miner addresses and block rewards for the latest block created on the [Ethereum](https://ethereum.org/en/) mainnet. This block reward data is useful to track miner and pool dominance as well as ETH supply and related indexes. For convenience purposes, we use the [Etherscan API](https://etherscan.io/apis) for this portion of the tutorial and in order to proceed, you should have an Etherscan API Key or get one from [Etherscan](https://etherscan.io/apis).
|
||||
|
||||
Since we are composing our application from first principles, we use the following services \(a local sketch of the composed data flow follows the list\):
|
||||
|
||||
* _get\_latest\_block_: takes an api key and returns the latest block number as a hex string
|
||||
* _hex2int_: takes a hex string and returns the corresponding integer value \(up to a u64\)
|
||||
* _get\_block_: takes an api key and block number as int and returns the reward block info
|
||||
* _extract\_miner\_address_: takes a json string and returns the miner address
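
To make the composition concrete before we write any AIR, here is a purely local, hypothetical TypeScript sketch of the intended data flow; the function parameters stand in for the remote services listed above and are illustrative assumptions, not actual SDK calls:

```typescript
// Local sketch only: in the tutorial these four steps are remote Fluence
// services coordinated by an AIR script, not local functions.
async function latestMinerAddress(
  apiKey: string,
  getLatestBlock: (key: string) => Promise<string>,           // -> latest block as hex string
  hexToInt: (hex: string) => Promise<number>,                  // -> block number as integer
  getBlock: (key: string, block: number) => Promise<string>,   // -> reward block info (JSON string)
  extractMinerAddress: (blockJson: string) => Promise<string>  // -> miner address
): Promise<string> {
  const hexBlock = await getLatestBlock(apiKey);
  const blockNumber = await hexToInt(hexBlock);
  const blockJson = await getBlock(apiKey, blockNumber);
  return extractMinerAddress(blockJson);
}
```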
|
||||
|
||||
Let's find these services on the [Fluence Dashboard](https://dash.fluence.dev/) and make a note of the corresponding service and node ids:
|
||||
|
||||
* [Ethereum Block Getter](https://dash.fluence.dev/blueprint/801037186238469ce354d2eb6d884091aaf9622ba7b1a83816cc45d39ab2000d) provides a method to retrieve the latest, i.e. most recent, Ethereum block number as a hex string and a reward block getter method, which returns the block information for a given integer block number. And yes, this service exposes two methods
|
||||
* service id: `74d5c5da-4c83-4af9-9371-2ab5d31f8019`
|
||||
* node id: `12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H`
|
||||
* [Hex Converter](https://dash.fluence.dev/blueprint/63ff63360ef64651f712a2ecf08868d1a71f9dff0af04e234e4d543a66872806), which exposes the hex\_to\_int method to convert a hex string \(starting with 0x\) to an integer value
|
||||
* service id: `285e2a5e-e505-475f-a99d-15c16c7253f9`
|
||||
* node id: `12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH`
|
||||
* [Extract Miner Address](https://dash.fluence.dev/blueprint/16a22a4033b6e98c45ac603fb520db77f4dcf42bf143f0d935262cb43136647e) extracts the miner address from a reward block json string
|
||||
* service id: `d13da294-004a-4c71-8631-a351c5f3489b`
|
||||
* node id: `12D3KooWCKCeqLPSgMnDjyFsJuWqREDtKNHx1JEBiwaMXhCLNTRb`
|
||||
|
||||
Let's test the hex conversion service in isolation with the following AIR script:
|
||||
|
||||
```text
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call hex_node (hex_service "hex_to_int") [hex_string] hex_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [hex_result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
{% hint style="info" %}
|
||||
Aquamarine Intermediate Representation \(AIR\) is a low level implementation of Aquamarine to coordinate services into applications. For a detailed introduction to AIR, please refer to the [Knowledgebase](../knowledge_knowledge/knowledge_aquamarine/hll/knowledge_aquamarine_air.md). For the remainder of this section, we will see three AIR _instructions_: _seq, xor,_ and _call._
|
||||
|
||||
_seq_ is the _sequential_ instruction that wraps arguments and executes them, you guessed it, sequentially. And yes, there is a _parallel_ instruction in the language.
|
||||
|
||||
_xor_ is the _branching_ instruction that takes two **instructions**, e.g., two _seq_ instructions, as arguments and evaluates the first argument, proceeding to the second instruction only if the first one failed.
|
||||
|
||||
_call_ is the _execution_ instruction to launch distributed service methods and takes the following data:
|
||||
**\(**_call_ **node-id \(service-id service-method\) \[input parameters\] result\)**
|
||||
{% endhint %}
|
||||
|
||||
As with the previous AIR script, the _xor_ takes care of capturing errors in case things don't pan out the way we've planned. Other than that, we are calling the `hex_to_int` method and we need to supply the service and node ids as well as the hex value. Save the above script to a local file called _hex2int.clj_ and use `fldist` to execute the script:
|
||||
|
||||
```bash
|
||||
fldist run_air -p hex2int.clj -d '{"hex_service":"285e2a5e-e505-475f-a99d-15c16c7253f9", "hex_node": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "hex_string":"0xF"}'
|
||||
```
|
||||
|
||||
which rewards our efforts with:
|
||||
|
||||
```bash
|
||||
client seed: 4G1GSe9sd38wSG4Uh9JVGYXzHY4nacYXJmAdYGGoM5Xz
|
||||
client peerId: 12D3KooWC3ntvNvUP6K4XADrkkZJcJs4ZK4kkgoMWZ3ou8C4AQ12
|
||||
node peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
Particle id: c0f44da7-3bfb-445b-896a-537c10143392. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
15
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '285e2a5e-e505-475f-a99d-15c16c7253f9',
|
||||
function_name: 'hex_to_int',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
We input the hex string 0xF and, as expected, got 15 base 10 back. Whoever implemented the hex conversion service seemingly got it right. So let's keep using it as we coordinate an application from multiple services.
|
||||
|
||||
Beware but do not fear the nesting and parentheses! As we're building a more complex application, our script of course grows a bit. Next, we use the _get\_latest\_block_ function and feed the result, a hex string, into the _hex\_to\_int_ conversion function and feed its output, an integer, to the _get\_block_ function to arrive at the reward block data. Of course, we wrap it all into the trusty XOR just in case something goes wrong.
|
||||
|
||||
```text
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [hex_block_result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [int_block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
Before we run the script, notice that we made explicit provisions for the service and node ids associated with each method and used the outputs, i.e., results, as input parameters for subsequent method calls. This further illustrates how Aquamarine allows developers to efficiently write applications from distributed network services.
|
||||
|
||||
Save the script locally to a file named _block\_getter.clj_ and run it with `fldist`:
|
||||
|
||||
```bash
|
||||
fldist run_air -p block_getter.clj -d '{"service_1":"74d5c5da-4c83-4af9-9371-2ab5d31f8019", "service_2":"285e2a5e-e505-475f-a99d-15c16c7253f9", "node_1": "12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H","node_2": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "api_key":<your api key>}'
|
||||
```
|
||||
|
||||
Let's see what we got:
|
||||
|
||||
```bash
|
||||
client seed: DZEJcJGCKgoZ6CPCnrZrrjdqkHu8LdnVVuPkKyfsXSjE
|
||||
client peerId: 12D3KooWNMF2yNXCjtCXCk7MnrjtXqj7Ej2jLa2kPoEBiseavvso
|
||||
node peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
Particle id: 50f54bad-03f3-41ba-9950-9f18b47fbdee. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
"0xb70466"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H',
|
||||
service_id: '74d5c5da-4c83-4af9-9371-2ab5d31f8019',
|
||||
function_name: 'get_latest_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
11994214
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '285e2a5e-e505-475f-a99d-15c16c7253f9',
|
||||
function_name: 'hex_to_int',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
"{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"11994214\",\"timeStamp\":\"1615158157\",\"blockMiner\":\"0xea674fdde714fd979de3edf0f56aa9716b898ec8\",\"blockReward\":\"4345475914435035590\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H',
|
||||
service_id: '74d5c5da-4c83-4af9-9371-2ab5d31f8019',
|
||||
function_name: 'get_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
Very cool. Our coordinated service flow generates the expected latest block hex string, which serves as an input to the hex conversion and the resulting integer value is used as an input in the get\_block method, which returns the associated reward block information. Just as planned. Beautiful.
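
As a quick local sanity check of the conversion step, the hex string returned by _get\_latest\_block_ should indeed correspond to the integer handed to _get\_block_; a throwaway TypeScript snippet \(not part of the Fluence workflow\) confirms it:

```typescript
// "0xb70466" is the latest-block hex string from the run above; parsing it
// with radix 16 reproduces the hex_to_int service output.
console.log(parseInt("0xb70466", 16)); // 11994214
```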
|
||||
|
||||
Of course, that leaves us wanting as our goal was to get the reward miner address. Not to worry, we incorporate the missing _extract\_miner\_address_ service call into our AIR script:
|
||||
|
||||
```text
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [hex_block_result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [int_block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_3 (service_3 "extract_miner_address") [block_result] reward_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [reward_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
Update your _block\_getter.clj_ file with the updated AIR script and fire up our trusty `fldist`:
|
||||
|
||||
```bash
|
||||
fldist run_air -p block_getter.clj -d '{"service_1":"74d5c5da-4c83-4af9-9371-2ab5d31f8019", "service_2":"285e2a5e-e505-475f-a99d-15c16c7253f9","service_3":"d13da294-004a-4c71-8631-a351c5f3489b" ,"node_1": "12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H","node_2": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "node_3":"12D3KooWCKCeqLPSgMnDjyFsJuWqREDtKNHx1JEBiwaMXhCLNTRb", "api_key":<your api key>}'
|
||||
```
|
||||
|
||||
and let's admire our handiwork:
|
||||
|
||||
```bash
|
||||
client seed: Az2SSwrLxEgbRZFdTR7ezcFgVu8D2YzND3ceLcBbReZ1
|
||||
client peerId: 12D3KooWK3uZKW9kk1p6zt4NHNx9HB16T17kBjxkt8JB7vAFmvUC
|
||||
node peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
Particle id: cdf7d2f0-3e99-4a9e-bb17-28b3e0326c97. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
"0xb70519"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H',
|
||||
service_id: '74d5c5da-4c83-4af9-9371-2ab5d31f8019',
|
||||
function_name: 'get_latest_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
11994393
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH',
|
||||
service_id: '285e2a5e-e505-475f-a99d-15c16c7253f9',
|
||||
function_name: 'hex_to_int',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
"{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"11994393\",\"timeStamp\":\"1615160413\",\"blockMiner\":\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\",\"blockReward\":\"3527180289300104721\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H',
|
||||
service_id: '74d5c5da-4c83-4af9-9371-2ab5d31f8019',
|
||||
function_name: 'get_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
"\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\""
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWCKCeqLPSgMnDjyFsJuWqREDtKNHx1JEBiwaMXhCLNTRb',
|
||||
service_id: 'd13da294-004a-4c71-8631-a351c5f3489b',
|
||||
function_name: 'extract_miner_address',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
As expected, we now also have the miner address for the latest block mined.[^1]
|
||||
|
||||
In summary, we set out to obtain the miner address of the most recent, aka latest, block created on Ethereum and identified a number of suitable, reusable services already available on different peers of the Fluence testnet to achieve our goal.
|
||||
|
||||
We again utilized Aquamarine and wrote an increasingly comprehensive AIR script to coordinate and compose these services into our application and successfully obtained the miner addresses for the latest Ethereum block.
|
||||
|
||||
AIR, once again, proved to be a powerful and efficient tool and illustrates the power of the p2p reusable services model and the ease of changing and expanding an application's or backend's features and capabilities by means of coordination.
|
||||
|
||||
In the next section, we add state to persist our results for future use by composing Sqlite into our application workflow.
|
||||
|
||||
[^1]: If you get an error instead of the well-formed result, it's most likely due to a null return from _get\_block_, which can happen if the block hasn't finalized, has been dropped, etc. at the time of the call.
|
||||
|
@ -1,41 +0,0 @@
|
||||
# Setup
|
||||
|
||||
Fluence provides developers with nodes, runtimes and tools to ease and accelerate the development of distributed networks, backends, and applications. In order to be able to utilize the Fluence support system, we need to install a few things on our machines.
|
||||
|
||||
If you don't have [Rust](https://www.rust-lang.org/) installed, this is as good a time as any to do so:
|
||||
|
||||
```bash
|
||||
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
|
||||
```
|
||||
|
||||
and follow the [instructions](https://www.rust-lang.org/tools/install). Once Rust is in place, install Rust _nightly_ and the Wasm tool chain:
|
||||
|
||||
```bash
|
||||
rustup toolchain install nightly
|
||||
rustup target add wasm32-wasi
|
||||
```
|
||||
|
||||
In addition, install the Marine REPL, `mrepl`, and CLI, `marine`, tools:
|
||||
|
||||
```bash
|
||||
$ cargo install marine --force
|
||||
$ cargo +nightly install mrepl --force
|
||||
```
|
||||
|
||||
Finally, you need [node](https://nodejs.org/en/) installed, and if you don't have it already, you may be best served by installing [NVM](https://github.com/nvm-sh/nvm):
|
||||
|
||||
```bash
|
||||
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
|
||||
nvm install --lts
|
||||
```
|
||||
|
||||
which allows us to install the [Fluence Service Distribution and Management tool](https://github.com/fluencelabs/proto-distributor), `fldist`.
|
||||
|
||||
```bash
|
||||
npm install -g @fluencelabs/fldist
|
||||
```
|
||||
|
||||
Throughout the tutorial you will be asked to copy AIR scripts and run them with the `fldist` tool. In order to use this tool, you need to use the terminal. The recurring workflow is to copy an AIR script from this book to a file on your system and then run the file with `fldist` in the terminal.
|
||||
|
||||
That's all we need to work through the examples. Let's get to it.
|
||||
|
@ -1,10 +0,0 @@
|
||||
# What's Next
|
||||
|
||||
We hope the Quick Start has given you a feel for Aquamarine and how to harness its power to quickly and effectively compose distributed services and applications. For a more in-depth look at Aquamarine and the Fluence stack, have a look at:
|
||||
|
||||
* how to build modules and services and deploy them to a network, the [Developing Modules And Services](quick_start_summary.md) chapters
|
||||
* the core [concepts](../knowledge_knowledge/knowledge_concepts.md) underlying Aquamarine and the Fluence stack
|
||||
* detailed examples, head over to the [Tutorials](quick_start_summary.md) section.
|
||||
|
||||
Please don't hesitate to reach out with questions, comments or contributions at any of our channels.
|
||||
|
@ -1,88 +0,0 @@
|
||||
# Using a Service
|
||||
|
||||
Let's dive right into peer-to-peer awesomeness by harnessing a distributed curl service, which pretty much keeps with its namesake: pass it a url and collect the response. Instead of developing our service from scratch, we reuse one already deployed to the [Fluence testnet](https://dash.fluence.dev/nodes).
|
||||
|
||||
The [Fluence Dashboard](https://dash.fluence.dev/) facilitates the discovery of available services, such as the [Curl Adapter](https://dash.fluence.dev/blueprint/b7d2454e-2a75-408c-a23a-fe35de3beeb9) service, which allows us to harness http\(s\) requests as a service. Drilling down on the metadata provides a few useful parameters such as _service id_, _node id_ and _ip address_, which we need to execute our distributed curl service.
|
||||
|
||||
In order to execute the cUrl service and collect the result, i.e., response, we call upon our composition and coordination medium Aquamarine via an Aquamarine Intermediate Representation \(AIR\) script.
|
||||
|
||||
```scheme
|
||||
;; handle possible errors via xor
|
||||
(xor
|
||||
(seq
|
||||
;; call function 'service_id.request' on node 'relay'
|
||||
(call relay (service_id "request") [url] result)
|
||||
|
||||
;; return result back to the client
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
;; if error, return it to the client
|
||||
(call %init_peer_id% (returnService "run") [%last_error%])
|
||||
)
|
||||
```
|
||||
|
||||
Without going too deep into Aquamarine and AIR, this script specifies that we call a public peer-to-peer relay \(node\) on the network, ask to run the \(curl\) _request_ function with the data parameter _url_ and the _service\_id_ parameter, and collect the _result_ **xor** the _error message_ in case of execution failure. We also promise to pass the _service\_id_ and _url_ parameters to the script.
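
Conceptually, the _xor_ branch plays the same role a try/catch would play in ordinary code; the following TypeScript analogy is purely illustrative and is not how AIR actually executes:

```typescript
// Rough analogy of the AIR script above: run the request on the relay,
// return the result, or hand the error back to the requesting peer.
async function curlRequestAnalogy(
  request: (url: string) => Promise<string>, // stands in for (service_id "request")
  url: string
): Promise<string> {
  try {
    return await request(url);         // (call relay (service_id "request") [url] result)
  } catch (error) {
    return `request failed: ${error}`;  // the xor branch returning %last_error%
  }
}
```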
|
||||
|
||||
The "magic" happens by handing the script to the `fldist` CLI tool, which then sends the script for execution to the specified p2p network and locally shadows the execution. Please note that Instead of developing full-fledged frontend applications, we use the `fldist` CLI tool. However, a [JS SDK](https://github.com/fluencelabs/fluence-js) is available to accelerate the development of more complex frontend applications.
|
||||
|
||||
{% hint style="info" %}
|
||||
Throughout the document, we utilize service and node ids, which in most cases may be different for you.
|
||||
{% endhint %}
|
||||
|
||||
The service id parameter obtained from the dashboard lookup above, e.g., "f92ce98b-1ed6-4ce3-9864-11f4e93a478f", and some Fluence goodness at both the local and remote levels enable us to:
|
||||
|
||||
1. find the p2p node hosting the curl service with the above service id,
|
||||
2. execute the service and
|
||||
3. collect the response
|
||||
|
||||
In your directory of choice, save the above script as _curl\_request.clj_ and run:
|
||||
|
||||
```bash
|
||||
$ fldist run_air -p curl_request.clj -d '{"service_id": "f92ce98b-1ed6-4ce3-9864-11f4e93a478f", "url":"https://api.duckduckgo.com/?q=homotopy&format=json"}'
|
||||
```
|
||||
|
||||
and voila, book-ended by process and network metadata, we see our result in the stdout pipe ready for further processing.
|
||||
|
||||
```bash
|
||||
client seed: 3qrmrkDUkXRu5tNawBz2wgvhaS6pgUSRVUdcG6hZgzjx
|
||||
client peerId: 12D3KooWLu13eNDTfvBeH92a7o4nzLGgjrbVnot85etMKPfzoQyK
|
||||
node peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
Particle id: 005a2c5a-3cde-48ed-92eb-3ce814281ba0. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
{
|
||||
"error": "",
|
||||
"ret_code": 0,
|
||||
"stderr": "",
|
||||
"stdout": "{\"Abstract\":\"In topology, a branch of mathematics, two continuous functions from one topological space to another are called homotopic if one can be \\\"continuously deformed\\\" into the other, such a deformation being called a homotopy between the two functions. A notable use of homotopy is the definition of homotopy groups and cohomotopy groups, important invariants in algebraic topology. In practice, there are technical difficulties in using homotopies with certain spaces. Algebraic topologists work with compactly generated spaces, CW
|
||||
<snip>
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb',
|
||||
service_id: '56f1a984-b719-45e9-baa8-f99ed0a9810b',
|
||||
function_name: 'request',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
^C
|
||||
```
|
||||
|
||||
Feel free to experiment with the AIR script not only with different url's but also different curl services already available from the [Fluence Dashboard](https://dash.fluence.dev/); maybe even force an error or two.
|
||||
|
||||
To recap, we:
|
||||
|
||||
* discovered a cUrl service on the Fluence Dashboard,
|
||||
* wrote a short AIR script to coordinate the service into an application
|
||||
* submitted the script with the Fluence distributor tool `fldist` to the Fluence p2p testnet
|
||||
* executed the \(remote\) curl service request, and
|
||||
* collected the result
|
||||
|
||||
With essentially a two-line script and a couple of parameters, we executed a search request as a service on a peer-to-peer network. Even this small example should illustrate the ease afforded by Aquamarine to compose applications from portable, reusable and distributed services, not only taking serverless to the next level by greatly reducing devops requirements but also empowering developers with a composition and coordination medium second to none.
|
||||
|
||||
In the next section, we build an Ethereum block getter application by coordinating multiple services into an application.
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Recipes
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Data Replication
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Error Management and Testing Of Services
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,4 +0,0 @@
|
||||
# IPFS
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Local Filesystem
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,105 +0,0 @@
|
||||
# cUrl as a Service
|
||||
|
||||
## Overview
|
||||
|
||||
[Curl](https://curl.se/) is a widely available and used command-line tool to receive or send data using URL syntax. Chances are, you probably just used it when you set up your Fluence development environment. For Fluence services to be able to interact with the world, cUrl is one option to facilitate https calls. Since Fluence modules are Wasm IT modules, cUrl cannot be a service intrinsic. Instead, the curl command-line tool needs to be made available and accessible at the node level. And for Fluence services to be able to interact with Curl, we need to code a cUrl adapter taking care of the mounted \(cUrl\) binary.
|
||||
|
||||
## Adapter Construction
|
||||
|
||||
The work for the cUrl adapter has been fundamentally done and is exposed by the Fluence Rust SDK. As a developer, the task remaining is to instantiate the adapter in the context of the module and services scope. The following code [snippet](https://github.com/fluencelabs/fce/tree/master/examples/url-downloader/curl_adapter) illustrates the implementation requirement.
|
||||
|
||||
```rust
|
||||
use fluence::fce;
|
||||
|
||||
use fluence::WasmLoggerBuilder;
|
||||
use fluence::MountedBinaryResult;
|
||||
|
||||
pub fn main() {
|
||||
WasmLoggerBuilder::new().build().unwrap();
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub fn download(url: String) -> String {
|
||||
log::info!("get called with url {}", url);
|
||||
|
||||
let result = unsafe { curl(vec![url]) };
|
||||
String::from_utf8(result.stdout).unwrap()
|
||||
}
|
||||
|
||||
#[fce]
|
||||
#[link(wasm_import_module = "host")]
|
||||
extern "C" {
|
||||
fn curl(cmd: Vec<String>) -> MountedBinaryResult;
|
||||
}
|
||||
```
|
||||
|
||||
with the following dependencies necessary in the Cargo.toml:
|
||||
|
||||
```rust
|
||||
fluence = { version = "=0.4.2", features = ["logger"] }
|
||||
log = "0.4.8"
|
||||
```
|
||||
|
||||
We are basically linking the [external](https://doc.rust-lang.org/std/keyword.extern.html) cUrl binary and are exposing access to it as an FCE interface called `download`.
|
||||
|
||||
### Code References
|
||||
|
||||
* [Mounted binaries](https://github.com/fluencelabs/fce/blob/c559f3f2266b924398c203a45863ebf2fb9252ec/fluence-faas/src/host_imports/mounted_binaries.rs)
|
||||
* [cUrl](https://github.com/curl/curl)
|
||||
|
||||
### Service Construction
|
||||
|
||||
In order to create a valid Fluence service, a service configuration is required.
|
||||
|
||||
```text
|
||||
modules_dir = "target/wasm32-wasi/release"
|
||||
|
||||
[[module]]
|
||||
name = "curl_adapter"
|
||||
logger_enabled = true
|
||||
|
||||
[mounted.mounted_binaries]
|
||||
curl = "/usr/bin/curl"
|
||||
```
|
||||
|
||||
We are specifying the location of the Wasm file, the import name of the Wasm file, some logging housekeeping, and the mounted binary reference with the command-line call information.
|
||||
|
||||
### Service Creation
|
||||
|
||||
```bash
|
||||
cargo new curl-service
|
||||
cd curl-service
|
||||
# copy the above rust code into src/main
|
||||
# copy the specified dependencies into Cargo.toml
|
||||
# copy the above service configuration into Config.toml
|
||||
|
||||
fce build --release
|
||||
```
|
||||
|
||||
You should now have the Fluence module curl\_adapter.wasm in `target/wasm32-wasi/release`; we can test our service with `fce-repl`.
|
||||
|
||||
### Service Test
|
||||
|
||||
Running the REPL, we use the `interface` command to list all available interfaces and the `call` command to run a method. For our purposes, we furnish the [https://duckduckgo.com/?q=Fluence+Labs](https://duckduckgo.com/?q=Fluence+Labs) url to give the curl adapter a workout.
|
||||
|
||||
```bash
|
||||
fce-repl Config.toml
|
||||
Welcome to the FCE REPL (version 0.5.2)
|
||||
app service was created with service id = 8ad81c3a-8c5c-4730-80d1-c54cd177725d
|
||||
elapsed time 40.312376ms
|
||||
|
||||
1> interface
|
||||
Loaded modules interface:
|
||||
|
||||
curl_adapter:
|
||||
fn download(url: String) -> String
|
||||
|
||||
2> call curl_adapter download ["https://duckduckgo.com/?q=Fluence+Labs"]
|
||||
result: String("<!DOCTYPE html><html lang=\"en-US\" class=\"no-js has-zcm no-theme\"><head><meta name=\"description\" content=\"DuckDuckGo. Privacy, Simplified.\"><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\"><title>Fluence Labs at DuckDuckGo</title><link rel=\"stylesheet\" href=\"/s1963.css\" type=\"text/css\"><link rel=\"stylesheet\" href=\"/r1963.css\" type=\"text/css\"><meta name=\"robots\" content=\"noindex,nofollow\"><meta name=\"referrer\" content=\"origin\"><meta name=\"apple-mobile-web-app-title\" content=\"Fluence Labs\"><link rel=\"preconnect\" href=\"https://links.duckduckgo.com\"><link rel=\"preload\" href=\"/font/ProximaNova-Reg-webfont.woff2\" as=\"font\" type=\"font/woff2\" crossorigin=\"anonymous\" /><link rel=\"preload\" href=\"/font/ProximaNova-Sbold-webfont.woff2\" as=\"font\" type=\"font/woff2\" crossorigin=\"anonymous\" /><link rel=\"shortcut icon\" href=\"/favicon.ico\" type=\"image/x-icon\" /><link id=\"icon60\" rel=\"apple-touch-icon\" href=\"/assets/icons/meta/DDG-iOS-icon_60x60.png?v=2\"/><link id=\"icon76\" rel=\"apple-touch-icon\" sizes=\"76x76\" href=\"/assets/icons/meta/DDG-iOS-icon_76x76.png?v=2\"/><link id=\"icon120\" rel=\"apple-touch-icon\" sizes=\"120x120\" href=\"/assets/icons/meta/DDG-iOS-icon_120x120.png?v=2\"/><link id=\"icon152\" rel=\"apple-touch-icon\"s
|
||||
<snip>
|
||||
ript\">DDG.index = DDG.index || {}; DDG.index.signalSummary = \"\";</script>")
|
||||
elapsed time: 334.545388ms
|
||||
|
||||
3>
|
||||
```
|
||||
|
@ -1,51 +0,0 @@
|
||||
# Setting Up Your Environment
|
||||
|
||||
In order to develop within the Fluence solution, [Rust](https://www.rust-lang.org/tools/install) and a small number of tools are required.
|
||||
|
||||
## Rust
|
||||
|
||||
Install Rust:
|
||||
|
||||
```bash
|
||||
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
|
||||
```
|
||||
|
||||
Once Rust is installed, we need to expand the toolchain and include the [nightly build](https://rust-lang.github.io/rustup/concepts/channels.html) and the [Wasm](https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html) compile target.
|
||||
|
||||
```bash
|
||||
rustup install nightly
|
||||
rustup target add wasm32-wasi
|
||||
```
|
||||
|
||||
To keep Rust and the toolchains updated:
|
||||
|
||||
```bash
|
||||
rustup self update
|
||||
rustup update
|
||||
```
|
||||
|
||||
There are a number of good Rust installation and IDE integration tutorials available. [DuckDuckGo](https://duckduckgo.com/) is your friend but if that's too much effort, have a look at [koderhq](https://www.koderhq.com/tutorial/rust/environment-setup/).
|
||||
|
||||
## Fluence Tools
|
||||
|
||||
Fluence provides several tools to support developers. The Fluence CLI, `fcli`, facilitates the compilation of modules to the necessary wasm32-wasi target. The Fluence REPL, `fce-repl`, on the other hand, is a cli tool to test and experiment with FCE modules and services locally.
|
||||
|
||||
```bash
|
||||
cargo install fcli
|
||||
cargo +nightly install frepl
|
||||
```
|
||||
|
||||
In addition, Fluence provides the [proto-distributor](https://github.com/fluencelabs/proto-distributor) tool, aka `fldist`, for service lifecycle management. From deploying services to the network to executing AIR scripts, `fldist` does it all.
|
||||
|
||||
```bash
|
||||
npm install -g @fluencelabs/fldist
|
||||
```
|
||||
|
||||
## Fluence SDK
|
||||
|
||||
For frontend development, the Fluence [JS-SDK](https://github.com/fluencelabs/fluence-js) is currently the favored, and only, tool.
|
||||
|
||||
```bash
|
||||
npm install @fluencelabs/fluence
|
||||
```
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Redis
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Sqlite
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Untitled
|
||||
|
@ -1,5 +0,0 @@
|
||||
# Research, Papers And References
|
||||
|
||||
* [Fluence Manifesto](https://fluence.network/manifesto.html)
|
||||
* [Fluence Protocol](https://github.com/fluencelabs/rfcs/blob/main/0-overview.md)
|
||||
|
@ -1,2 +0,0 @@
|
||||
# Tutorials
|
||||
|
@ -1,137 +0,0 @@
|
||||
# Add Your Own Builtins
|
||||
|
||||
As discussed in the [Node]() section, some service functionalities have ubiquitous demand, making them suitable candidates to be directly deployed to a peer node. The [Aqua distributed hash table](https://github.com/fluencelabs/fluence/tree/master/deploy/builtins/aqua-dht) \(DHT\) is an example of a builtin service. The remainder of this tutorial guides you through the steps necessary to create and deploy a builtin service.
|
||||
|
||||
In order to have a service available out-of-the-box with the necessary startup and scheduling scripts, we can take advantage of the Fluence [deployer feature](https://github.com/fluencelabs/fluence/tree/master/deploy) for Node native services. This feature handles the complete deployment process including
|
||||
|
||||
* module uploads,
|
||||
* service deployment and
|
||||
* script initialization and scheduling
|
||||
|
||||
Note that the deployment process is a fully automated workflow requiring you to merely submit your service assets, i.e., Wasm modules and configuration scripts, in the appropriate format as a PR to the [Fluence](https://github.com/fluencelabs/fluence) repository.
|
||||
|
||||
At this point you should have a solid grasp of creating service modules and their associated configuration files. See the [Developing Modules And Services]() section for more details.
|
||||
|
||||
Our first step is to fork the [Fluence](https://github.com/fluencelabs/fluence) repo by clicking on the Fork button, upper right of the repo webpage, and follow the instructions to create a local copy. In your local repo copy, check out a new branch with a new, unique branch name:
|
||||
|
||||
```text
|
||||
cd fluence
|
||||
git checkout -b MyBranchName
|
||||
```
|
||||
|
||||
In our new branch, we create a directory with the service name in the _deploy/builtins_ directory:
|
||||
|
||||
```text
|
||||
cd deploy/builtins
|
||||
mkdir my-new-super-service
|
||||
cd my-new-super-service
|
||||
```
|
||||
|
||||
Replace _my-new-super-service_ with your service name.
|
||||
|
||||
Now we can build and populate the required directory structure with your service assets. You should put your service files in the corresponding _my-new-super-service_ directory.
|
||||
|
||||
## Requirements
|
||||
|
||||
In order to deploy a builtin service, we need
|
||||
|
||||
* the Wasm file for each module required for the service
|
||||
* the blueprint file for the service
|
||||
* the optional start and scheduling scripts
|
||||
|
||||
### Blueprint
|
||||
|
||||
Blueprints capture the service name and dependencies:
|
||||
|
||||
```javascript
|
||||
// example_blueprint.json
|
||||
{
|
||||
"name": "my-new-super-service",
|
||||
"dependencies": [
|
||||
"name:my_module_1",
|
||||
"name:my_module_2",
|
||||
"hash:Hash(my_module_3.wasm)"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
where
|
||||
|
||||
* name specifies the service's name and
|
||||
* dependencies list the names of the Wasm modules or the Blake3 hash of the Wasm module
|
||||
|
||||
In the above example, _my\_module\_i_ refers to the i-th module created when you compiled your service code.
|
||||
|
||||
{% hint style="info" %}
|
||||
The easiest way to get the Blake3 hash of our Wasm modules is to install the [b3sum](https://crates.io/crates/blake3) utility:
|
||||
|
||||
```text
|
||||
cargo install b3sum
|
||||
b3sum my_module_3.wasm
|
||||
```
|
||||
{% endhint %}
|
||||
|
||||
If you decide to use the hash approach, please use the hash for the config file names as well \(see below\).
|
||||
|
||||
### **Start Script**
|
||||
|
||||
Start scripts, which are optional, execute once after service deployment or node restarts. They are submitted as _air_ files and may be accompanied by a _json_ file containing the necessary parameters.
|
||||
|
||||
```text
|
||||
;; on_start.air
|
||||
(seq
|
||||
(call relay ("some_service_alias" "some_func1") [variable1] result)
|
||||
(call relay ("some_service_alias" "some_func2") [variable2 result])
|
||||
)
|
||||
```
|
||||
|
||||
and the associated data file:
|
||||
|
||||
```javascript
|
||||
// on_start.json data for on_start.air
|
||||
{
|
||||
"variable1" : "some_string",
|
||||
"variable2" : 5,
|
||||
}
|
||||
```
|
||||
|
||||
### **Scheduling Script**
|
||||
|
||||
Scheduling scripts allow us to decouple service execution from the client and instead rely on a cron-like scheduler running on a node to trigger our service\(s\). For a brief overview, see [additional concepts]()
|
||||
|
||||
### Directory Structure
|
||||
|
||||
Now that we have our requirements covered, we can populate the directory structure we started to lay out at the beginning of this section. As mentioned above, service deployment as a builtin is an automated workflow once our PR is accepted. Hence, it is imperative to adhere to the directory structure below:
|
||||
|
||||
```text
|
||||
-- builtins
|
||||
-- {service_alias}
|
||||
-- scheduled
|
||||
-- {script_name}_{interval_in_seconds}.air [optional]
|
||||
-- blueprint.json
|
||||
-- on_start.air [optional]
|
||||
-- on_start.json [optional]
|
||||
-- {module1_name}.wasm
|
||||
-- {module1_name}_config.json
|
||||
-- Hash(module2_name.wasm).wasm
|
||||
-- Hash(module2_name.wasm)_config.json
|
||||
...
|
||||
```
|
||||
|
||||
For a complete example, please see the [aqua-dht](https://github.com/fluencelabs/fluence/tree/master/deploy/builtins/aqua-dht) builtin:
|
||||
|
||||
```text
|
||||
fluence
|
||||
--deploy
|
||||
--builtins
|
||||
--aqua-dht
|
||||
-aqua-dht.wasm
|
||||
-aqua-dht_config.json
|
||||
-blueprint.json
|
||||
-scheduled
|
||||
-sqlite3.wasm # or 558a483b1c141b66765947cf6a674abe5af2bb5b86244dfca41e5f5eb2a86e9e.wasm
|
||||
-sqlite3_config.json # or 558a483b1c141b66765947cf6a674abe5af2bb5b86244dfca41e5f5eb2a86e9e_config.json
|
||||
```
|
||||
|
||||
which is based on the [eponymous](https://github.com/fluencelabs/aqua-dht) service project.
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Building a Chat Application
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,4 +0,0 @@
|
||||
# Building a Collaborative Editor
|
||||
|
||||
Coming soon. If you really need this section, contact us through any of the social media channels or Github.
|
||||
|
@ -1,238 +0,0 @@
|
||||
# Building a Frontend with JS-SDK
|
||||
|
||||
Fluence provides the means to connect to the network from a javascript environment. It is currently tested to work in node.js and modern browsers.
|
||||
|
||||
To create an application you will need two main building blocks: the JS SDK itself and the aqua compiler. Both of them are provided in the form of npm packages. The JS SDK wraps the air interpreter and provides a connection to the network. There is a low-level api for executing air scripts and registering handlers for service calls. The aqua compiler allows you to write code in the aqua language and compile it into typescript code which can be directly used with the SDK.
|
||||
|
||||
Even though all the logic could be programmed by hand with raw air, it is strongly recommended to use aqua.
|
||||
|
||||
### Basic usage
|
||||
|
||||
As previously said, you can use Fluence with any frontend or nodejs framework. The JS SDK can be used like any other npm library. For the purpose of the demo we will init a bare-bones nodejs package and show you the steps needed to install the JS SDK and the aqua compiler. Feel free to use the tool most suitable for the framework used in your application; the installation process should be roughly the same.
|
||||
|
||||
#### 1. Start with an npm package
|
||||
|
||||
For the purpose of the demo we will use a very minimal npm package with typescript support:
|
||||
|
||||
```text
|
||||
src
|
||||
┗ index.ts (1)
|
||||
package.json (2)
|
||||
tsconfig.json
|
||||
```
|
||||
|
||||
index.ts \(1\):
|
||||
|
||||
```typescript
|
||||
async function main() {
|
||||
console.log("Hello, world!");
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
|
||||
package.json \(2\):
|
||||
|
||||
```javascript
|
||||
{
|
||||
"name": "demo",
|
||||
"version": "1.0.0",
|
||||
"description": "",
|
||||
"main": "index.js",
|
||||
"scripts": {
|
||||
"exec": "node -r ts-node/register src/index.ts"
|
||||
},
|
||||
"author": "",
|
||||
"license": "ISC",
|
||||
"devDependencies": {
|
||||
"ts-node": "^9.1.1",
|
||||
"typescript": "^4.2.4"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Let's test if it works:
|
||||
|
||||
```bash
|
||||
$ npm install
|
||||
$ npm run exec
|
||||
```
|
||||
|
||||
The following should be printed
|
||||
|
||||
```bash
|
||||
$ npm run exec
|
||||
|
||||
> demo@1.0.0 exec C:\work\demo
|
||||
> node -r ts-node/register src/index.ts
|
||||
|
||||
Hello, world!
|
||||
$ C:\work\demo>
|
||||
```
|
||||
|
||||
#### 2. Setting up the JS SDK and connecting to the Fluence network
|
||||
|
||||
Install the dependencies; you will need these two packages:
|
||||
|
||||
```bash
|
||||
npm install @fluencelabs/fluence @fluencelabs/fluence-network-environment
|
||||
```
|
||||
|
||||
The first one is the SDK itself and the second is a maintained list of Fluence networks and nodes to connect to.
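
For orientation, each entry in that list pairs a relay's multiaddress with its peer id. The sketch below assumes the entry shape exposed by the package \(`multiaddr` and `peerId` fields\); the field names are an assumption based on how `testNet` is used later in this section:

```typescript
import { testNet } from "@fluencelabs/fluence-network-environment";

// Inspect one of the maintained relay entries; field names are assumed.
const relay = testNet[1];
console.log(relay.peerId);    // e.g. 12D3KooW...
console.log(relay.multiaddr); // e.g. /ip4/.../tcp/9001/ws
```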
|
||||
|
||||
All of the communication with the Fluence network is done by using `FluenceClient`. You can create one with the `createClient` function. The client encapsulates the air interpreter and connects to the network through the relay. Currently any node in the network can be used as a relay.
|
||||
|
||||
```typescript
|
||||
import { createClient } from "@fluencelabs/fluence";
|
||||
import { testNet } from "@fluencelabs/fluence-network-environment";
|
||||
|
||||
async function main() {
|
||||
const client = await createClient(testNet[1]);
|
||||
console.log("Is client connected: ", client.isConnected);
|
||||
|
||||
await client.disconnect();
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
|
||||
Let's try it out:
|
||||
|
||||
```bash
|
||||
$ npm run exec
|
||||
|
||||
> demo@1.0.0 exec C:\work\demo
|
||||
> node -r ts-node/register src/index.ts
|
||||
|
||||
Is client connected: true
|
||||
$
|
||||
```
|
||||
|
||||
**Note**: typically you should have a single `FluenceClient` instance per application since it represents its identity in the network. You are free to store the instance anywhere you like.
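
A minimal sketch of keeping that single instance in one module and sharing it across the application \(assuming the `FluenceClient` type is exported by the SDK\):

```typescript
// client.ts -- lazily create and reuse one FluenceClient for the whole app.
import { createClient, FluenceClient } from "@fluencelabs/fluence";
import { testNet } from "@fluencelabs/fluence-network-environment";

let client: FluenceClient | undefined;

export async function getClient(): Promise<FluenceClient> {
  if (!client) {
    client = await createClient(testNet[1]);
  }
  return client;
}
```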
|
||||
|
||||
#### 3. Setting Up The Aqua Compiler
|
||||
|
||||
Aqua is the preferred language for the Fluence network. It can be used with javascript-based environments via an npm package.
|
||||
|
||||
```bash
|
||||
npm install --save-dev @fluencelabs/aqua
|
||||
```
|
||||
|
||||
We will also need the standard library for the language
|
||||
|
||||
```text
|
||||
npm install --save-dev @fluencelabs/aqua-lib
|
||||
```
|
||||
|
||||
Let's add our first aqua file:
|
||||
|
||||
```text
|
||||
aqua (1)
|
||||
┗ demo.aqua (2)
|
||||
node_modules
|
||||
src
|
||||
┣ compiled (3)
|
||||
┗ index.ts
|
||||
package-lock.json
|
||||
package.json
|
||||
tsconfig.json
|
||||
```
|
||||
|
||||
Add two directories, one for aqua sources \(1\) and another for the typescript output \(3\)
|
||||
|
||||
Create a new text file called `demo.aqua` \(2\):
|
||||
|
||||
```text
|
||||
import "@fluencelabs/aqua-lib/builtin.aqua"
|
||||
|
||||
func demo(nodePeerId: PeerId) -> []string:
|
||||
on nodePeerId:
|
||||
res <- Peer.identify()
|
||||
<- res.external_addresses
|
||||
```
|
||||
|
||||
This script will gather the list of external addresses from some node in the network. For more information about the aqua language refer to the aqua documentation.
|
||||
|
||||
The aqua code can now be compiled by using the compiler CLI. We suggest adding a script to the package.json file like so:
|
||||
|
||||
```javascript
|
||||
...
|
||||
"scripts": {
|
||||
"exec": "node -r ts-node/register src/index.ts",
|
||||
"compile-aqua": "aqua -i ./aqua/ -o ./src/compiled"
|
||||
},
|
||||
...
|
||||
```
|
||||
|
||||
Run the compiler:
|
||||
|
||||
```bash
|
||||
npm run compile-aqua
|
||||
```
|
||||
|
||||
A typescript file should be generated like so:
|
||||
|
||||
```text
|
||||
aqua
|
||||
┗ demo.aqua
|
||||
node_modules
|
||||
src
|
||||
┣ compiled
|
||||
┃ ┗ demo.ts <--
|
||||
┗ index.ts
|
||||
package-lock.json
|
||||
package.json
|
||||
tsconfig.json
|
||||
```
|
||||
|
||||
#### 4. Consuming The Compiled Code
|
||||
|
||||
Using the code generated by the compiler is as easy as calling a function. The compiler generates all the boilerplate needed to send a particle into the network and wraps it into a single call. Note that all the type information and therefore type checking and code completion facilities are there!
|
||||
|
||||
Let's do it!
|
||||
|
||||
```typescript
|
||||
import { createClient } from "@fluencelabs/fluence";
|
||||
import { testNet } from "@fluencelabs/fluence-network-environment";
|
||||
|
||||
import { demo } from "./compiled/demo"; // import the generated file
|
||||
|
||||
async function main() {
|
||||
const client = await createClient(testNet[1]);
|
||||
console.log("Is client connected: ", client.isConnected);
|
||||
|
||||
const otherNode = testNet[2].peerId;
|
||||
const addresses = await demo(client, otherNode); // call it like a normal function in typescript
|
||||
console.log(`Node ${otherNode} is connected to: ${addresses}`);
|
||||
|
||||
await client.disconnect();
|
||||
}
|
||||
|
||||
main();
|
||||
```
|
||||
|
||||
If everything is fine, you should see a similar result:
|
||||
|
||||
```text
|
||||
$ npm run exec
|
||||
|
||||
> demo@1.0.0 exec C:\work\demo
|
||||
> node -r ts-node/register src/index.ts
|
||||
|
||||
Is client connected: true
|
||||
Node 12D3KooWHk9BjDQBUqnavciRPhAYFvqKBe4ZiPPvde7vDaqgn5er is connected to: /ip4/138.197.189.50/tcp/7001,/ip4/138.197.189.50/tcp/9001/ws
|
||||
|
||||
$
|
||||
```
|
||||
|
||||
### Advanced usage
|
||||
|
||||
The Fluence JS SDK gives you the option to register your own handlers for aqua vm service calls.
|
||||
|
||||
**TBD**
|
||||
|
||||
### References
|
||||
|
||||
* For the list of compiler options see: [https://github.com/fluencelabs/aqua](https://github.com/fluencelabs/aqua)
|
||||
* Repository with additional examples: [https://github.com/fluencelabs/aqua-playground](https://github.com/fluencelabs/aqua-playground)
|
||||
|
@ -1,105 +0,0 @@
|
||||
# cUrl As A Service
|
||||
|
||||
### Overview
|
||||
|
||||
[Curl](https://curl.se/) is a widely available and used command-line tool to receive or send data using URL syntax. Chances are, you probably just used it when you set up your Fluence development environment. For Fluence services to be able to interact with the world, cUrl is one option to facilitate https calls. Since Fluence modules are Wasm IT modules, cUrl cannot be a service intrinsic. Instead, the curl command-line tool needs to be made available and accessible at the node level. And for Fluence services to be able to interact with Curl, we need to code a cUrl adapter taking care of the mounted \(cUrl\) binary.
|
||||
|
||||
### Adapter Construction
|
||||
|
||||
The work for the cUrl adapter has been fundamentally done and is exposed by the Fluence Rust SDK. As a developer, the task remaining is to instantiate the adapter in the context of the module and services scope. The following code [snippet](https://github.com/fluencelabs/fce/tree/master/examples/url-downloader/curl_adapter) illustrates the implementation requirement.
|
||||
|
||||
```rust
|
||||
use fluence::fce;
|
||||
|
||||
use fluence::WasmLoggerBuilder;
|
||||
use fluence::MountedBinaryResult;
|
||||
|
||||
pub fn main() {
|
||||
WasmLoggerBuilder::new().build().unwrap();
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub fn download(url: String) -> String {
|
||||
log::info!("get called with url {}", url);
|
||||
|
||||
let result = unsafe { curl(vec![url]) };
|
||||
String::from_utf8(result.stdout).unwrap()
|
||||
}
|
||||
|
||||
#[fce]
|
||||
#[link(wasm_import_module = "host")]
|
||||
extern "C" {
|
||||
fn curl(cmd: Vec<String>) -> MountedBinaryResult;
|
||||
}
|
||||
```
|
||||
|
||||
with the following dependencies necessary in the Cargo.toml:
|
||||
|
||||
```rust
|
||||
fluence = { version = "=0.4.2", features = ["logger"] }
|
||||
log = "0.4.8"
|
||||
```
|
||||
|
||||
We are basically linking the [external](https://doc.rust-lang.org/std/keyword.extern.html) cUrl binary and are exposing access to it as an FCE interface called `download`.
|

### Code References

* [Mounted binaries](https://github.com/fluencelabs/fce/blob/c559f3f2266b924398c203a45863ebf2fb9252ec/fluence-faas/src/host_imports/mounted_binaries.rs)
* [cUrl](https://github.com/curl/curl)

### Service Construction

In order to create a valid Fluence service, a service configuration is required.

```text
modules_dir = "target/wasm32-wasi/release"

[[module]]
name = "curl_adapter"
logger_enabled = true

[mounted.mounted_binaries]
curl = "/usr/bin/curl"
```

We are specifying the location of the Wasm file, the import name of the Wasm file, some logging housekeeping, and the mounted binary reference with the command-line call information.

### Remote Service Creation

```bash
cargo new curl-service
cd curl-service
# copy the above Rust code into src/main.rs
# copy the specified dependencies into Cargo.toml
# copy the above service configuration into Config.toml

fce build --release
```

You should now have the Fluence module curl\_adapter.wasm in `target/wasm32-wasi/release`, and we can test the service with `fce-repl`.

### Service Test

Running the REPL, we use the `interface` command to list all available interfaces and the `call` command to run a method. For our purposes, we furnish the [https://duckduckgo.com/?q=Fluence+Labs](https://duckduckgo.com/?q=Fluence+Labs) url to give the curl adapter a workout.

```bash
fce-repl Config.toml
Welcome to the FCE REPL (version 0.5.2)
app service was created with service id = 8ad81c3a-8c5c-4730-80d1-c54cd177725d
elapsed time 40.312376ms

1> interface
Loaded modules interface:

curl_adapter:
fn download(url: String) -> String

2> call curl_adapter download ["https://duckduckgo.com/?q=Fluence+Labs"]
result: String("<!DOCTYPE html><html lang=\"en-US\" class=\"no-js has-zcm no-theme\"><head><meta name=\"description\" content=\"DuckDuckGo. Privacy, Simplified.\"><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\"><title>Fluence Labs at DuckDuckGo</title><link rel=\"stylesheet\" href=\"/s1963.css\" type=\"text/css\"><link rel=\"stylesheet\" href=\"/r1963.css\" type=\"text/css\"><meta name=\"robots\" content=\"noindex,nofollow\"><meta name=\"referrer\" content=\"origin\"><meta name=\"apple-mobile-web-app-title\" content=\"Fluence Labs\"><link rel=\"preconnect\" href=\"https://links.duckduckgo.com\"><link rel=\"preload\" href=\"/font/ProximaNova-Reg-webfont.woff2\" as=\"font\" type=\"font/woff2\" crossorigin=\"anonymous\" /><link rel=\"preload\" href=\"/font/ProximaNova-Sbold-webfont.woff2\" as=\"font\" type=\"font/woff2\" crossorigin=\"anonymous\" /><link rel=\"shortcut icon\" href=\"/favicon.ico\" type=\"image/x-icon\" /><link id=\"icon60\" rel=\"apple-touch-icon\" href=\"/assets/icons/meta/DDG-iOS-icon_60x60.png?v=2\"/><link id=\"icon76\" rel=\"apple-touch-icon\" sizes=\"76x76\" href=\"/assets/icons/meta/DDG-iOS-icon_76x76.png?v=2\"/><link id=\"icon120\" rel=\"apple-touch-icon\" sizes=\"120x120\" href=\"/assets/icons/meta/DDG-iOS-icon_120x120.png?v=2\"/><link id=\"icon152\" rel=\"apple-touch-icon\"s
<snip>
ript\">DDG.index = DDG.index || {}; DDG.index.signalSummary = \"\";</script>")
elapsed time: 334.545388ms

3>
```

@ -1,4 +0,0 @@

# Developing a Frontend Application with JS-SDK

Coming soon. If you really need this section, contact us through any of the social media channels or Github.

@ -1,51 +0,0 @@

# Setting Up Your Environment

In order to develop within the Fluence solution, [Rust](https://www.rust-lang.org/tools/install) and a small number of tools are required.

## Rust

Install Rust:

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

Once Rust is installed, we need to expand the toolchain to include the [nightly build](https://rust-lang.github.io/rustup/concepts/channels.html) and the [Wasm](https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html) compile target.

```bash
rustup install nightly
rustup target add wasm32-wasi
```

To keep Rust and the toolchains updated:

```bash
rustup self update
rustup update
```

There are a number of good Rust installation and IDE integration tutorials available. [DuckDuckGo](https://duckduckgo.com/) is your friend, but if that's too much effort, have a look at [koderhq](https://www.koderhq.com/tutorial/rust/environment-setup/).

## Fluence Tools

Fluence provides several tools to support developers. The Fluence CLI, `fcli`, facilitates the compilation of modules to the necessary wasm32-wasi target. The Fluence REPL, `fce-repl`, on the other hand, is a CLI tool to test and experiment with FCE modules and services locally.

```bash
cargo install fcli
cargo +nightly install frepl
```

In addition, Fluence provides the [proto-distributor](https://github.com/fluencelabs/proto-distributor) tool, aka `fldist`, for service lifecycle management. From deploying services to the network to executing AIR scripts, `fldist` does it all.

```bash
npm install -g @fluencelabs/fldist
```

## Fluence SDK

For frontend development, the Fluence [JS-SDK](https://github.com/fluencelabs/fluence-js) is currently the favored, and only, tool.

```bash
npm install @fluencelabs/fluence
```

@ -1,4 +0,0 @@

# Deploy A Private Fluence Network

Coming soon. If you really need this section, contact us through any of the social media channels or Github.

@ -1,4 +0,0 @@

# Securing Services

Coming soon. If you really need this section, contact us through any of the social media channels or Github.

@ -1,153 +0,0 @@

# Deploy A Local Fluence Node

A significant chunk of the developing and testing of Fluence services can be accomplished on an isolated, local node. In this brief tutorial we set up a local, dockerized Fluence node and test its functionality. In subsequent tutorials, we cover the steps required to join an existing network and to run your own network.

The fastest way to get a Fluence node up and running is to use [docker](https://docs.docker.com/get-docker/):

```bash
docker run -d --name fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 18080 fluencelabs/fluence
```

where the `-d` flag runs the container in detached mode, the `-e` flag sets the environment variables, and the `-p` flags expose the ports: 7777 is the TCP port, 9999 the WebSocket port, and, optionally, 18080 the Prometheus port.

Once the container is up and running, we can tail the log \(output\) with

```bash
docker logs -f fluence

[2021-03-11T01:31:17.574274Z INFO particle_node]
+-------------------------------------------------+
| Hello from the Fluence Team. If you encounter |
| any troubles with node operation, please update |
| the node via |
| docker pull fluencelabs/fluence:latest |
| |
| or contact us at |
| github.com/fluencelabs/fluence/discussions |
+-------------------------------------------------+

[2021-03-11T01:31:17.575062Z INFO server_config::fluence_config] Loading config from "/.fluence/Config.toml"
[2021-03-11T01:31:17.575461Z INFO server_config::keys] generating a new key pair
[2021-03-11T01:31:17.575768Z WARN server_config::defaults] New management key generated. private in base64 = VE0jt68kqa2B/SMOd3VuuPd14O2WTmj6Dl//r6VM+Wc=; peer_id = 12D3KooWNGuGgQVUA6aJMGMGqkBCFmLZqMwmp6pzmv1WLYdi7gxN
[2021-03-11T01:31:17.575797Z INFO particle_node] AIR interpreter: "./aquamarine_0.7.3.wasm"
[2021-03-11T01:31:17.575864Z INFO particle_node::config::certificates] storing new certificate for the key pair
[2021-03-11T01:31:17.577028Z INFO particle_node] public key = BRqbUhVD2XQ6YcWqXW1D21n7gPg15STWTG8C7pMLfqg2
[2021-03-11T01:31:17.577848Z INFO particle_node::node] server peer id = 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
<snip>
```

For future interaction with the node, we need to retain the server peer id, 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx. And if you feel the need to snoop around the container:

```bash
docker exec -it fluence bash
```

will get you in.

Now that we have a local node, we can use the `fldist` tool to interact with it. From the Quick Start, you may recall that we need the node-id and node-addr:

* node-id: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
* node-addr: /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx

Let's inspect our node and check for any available modules and interfaces:

```bash
fldist get_modules --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
client seed: 43PmCycRqLt9h3t5Dbmkc3vpNjF9qrNDEVLvQhjCQYSj
client peerId: 12D3KooWQXTe2aFzUsYFf9mBHe4poey45nmAoa8PQwCc2iy9BLMW
node peerId: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
[[]]

fldist get_interfaces --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
client seed: DGf3E48yr73tJbxXpfxyNiRNFsoeRgxKUCpUDYafkXaN
client peerId: 12D3KooWEY37spzSbrg1GTFEo67p9X8cFqmYDHuzaBWWJ9aRT1G2
node peerId: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
60000
[ [] ]
to expand interfaces, use get_interfaces --expand
```

Since we just initiated the node, we expect no modules and no interfaces, and the `fldist` queries confirm our expectations. To further explore and validate the node, we can create a small [greeting](https://github.com/fluencelabs/fce/tree/master/examples/greeting) service.

```bash
mkdir fluence-greeter
cd fluence-greeter
# download the greeting.wasm file into this directory
# https://github.com/fluencelabs/fce/blob/master/examples/greeting/artifacts/greeting.wasm -- Download button to the right
echo '{ "name":"greeting"}' > greeting_cfg.json
```
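
For reference, the module packaged in greeting.wasm is tiny. The sketch below is an approximation based on the linked greeting example, not the verbatim source; check the fluencelabs/fce repository for the actual code.

```rust
use fluence::fce;

// Module entry point; nothing to set up for this example.
pub fn main() {}

// Exported greeting function: returns "Hi, <name>", which matches the
// output we will see from the AIR call below.
#[fce]
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```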

We just grabbed the greeting Wasm file from the Fluence repo and created a service configuration file, greeting\_cfg.json, which allows us to create a new GreetingService:

```bash
fldist --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx new_service --ms /Users/bebo/localdev/fce/examples/greeting/artifacts/greeting.wasm:greeting_cfg.json -n GreetingService
client seed: 7VtMT7dbdfuU2ewWHEo42Ysg5B9KTB5gAgM8oDEs4kJk
client peerId: 12D3KooWRSmoTL64JVXna34myzAuKWaGkjE6EBAb9gaR4hyyyQDM
node peerId: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
uploading blueprint GreetingService to node 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx via client 12D3KooWRSmoTL64JVXna34myzAuKWaGkjE6EBAb9gaR4hyyyQDM
NON-CONSTANT BLUEPRINT ID: Expected blueprint id to be predefined as 88b9b328-7c2b-44fe-8f2c-01b52db12fd9, but it was generated by node as 94d02dfe696549a98e23c5de8713e7c6d6f91694e823790a2f6dcfcc93843be3
service id: 64551400-6296-4701-8e82-daf0b4e02751
service created successfully
```

We now have a greeting service running on our node. As always, make a note of the service id, 64551400-6296-4701-8e82-daf0b4e02751.

```bash
fldist get_modules --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
client seed: HXoV5UfoBAtT8vM2zibm6oiTt7ecFBbP3xSF2dec4RTF
client peerId: 12D3KooWGJ8crCtYy4es835v5dVhTbD7snyLxCQupuiq2sLSXMyA
node peerId: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
[[{"config":{"logger_enabled":true,"logging_mask":null,"mem_pages_count":100,"mounted_binaries":null,"wasi":{"envs":null,"mapped_dirs":null,"preopened_files":[]}},"hash":"80a992ec969576289c61c4a911ba149083272166ffec2949d9d4a066532eec1d","name":"greeting"}]]
```

Yep, checking once again for modules, the output confirms that the greeting service is available. Writing a small AIR script allows us to use the service:

```text
(xor
    (seq
        (call relay (service "greeting") [name] result)
        (call %init_peer_id% (returnService "run") [result])
    )
    (call %init_peer_id% (returnService "run") [%last_error%])
)
```

Copy and save the script to greeting.clj and we can use our trusted `fldist` tool:

```bash
fldist --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx run_air -p greeting.clj -d '{"service": "64551400-6296-4701-8e82-daf0b4e02751", "name":"Fluence"}'
client seed: 8eXzEhypvkYST82sakeS4NeGFSyxqyCSpv2GQj3tQK5E
client peerId: 12D3KooWLFqJwuHNe2kWF8SMgX6cm24L83JUADFcbrj5fC1z3b21
node peerId: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx
Particle id: 14db3aff-b1a9-439e-8890-d0cdc9a0bacd. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  "Hi, Fluence"
]
[
  [
    {
      peer_pk: '12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx',
      service_id: '64551400-6296-4701-8e82-daf0b4e02751',
      function_name: 'greeting',
      json_path: ''
    }
  ]
]
===================
```

Yep, our node and the tools are working as expected. Going back to the logs, we can further verify the script execution:

```bash
docker logs -f fluence
<snip>
[2021-03-12T02:42:51.041267Z INFO aquamarine::particle_executor] Executing particle 14db3aff-b1a9-439e-8890-d0cdc9a0bacd
[2021-03-12T02:42:51.041927Z INFO particle_closures::host_closures] Executed host call "64551400-6296-4701-8e82-daf0b4e02751" "greeting" (96us 700ns)
[2021-03-12T02:42:51.046652Z INFO particle_node::network_api] Sent particle 14db3aff-b1a9-439e-8890-d0cdc9a0bacd to 12D3KooWLFqJwuHNe2kWF8SMgX6cm24L83JUADFcbrj5fC1z3b21 @ [/ip4/172.17.0.1/tcp/61636/ws]
```

Looks like our node container and logging are up and running, ready for your development use. As the Fluence team is rapidly developing, make sure you stay up to date. Check the repo or [Docker Hub](https://hub.docker.com/r/fluencelabs/fluence) and update with `docker pull fluencelabs/fluence:latest`.

Happy composing!

@ -1,4 +0,0 @@

# TrustGraph In Action

Coming soon. If you really need this section, contact us through any of the social media channels or Github.
