Merge pull request #14 from fluencelabs/2.0

2.0
This commit is contained in:
boneyard93501 2021-07-01 19:39:32 -05:00 committed by GitHub
commit e9e38da81c
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
6 changed files with 263 additions and 13 deletions

View File

@ -1,12 +1,20 @@
# Introduction
Fluence Labs builds distributed networks, development tools, components and support systems to allow developers to efficiently and reliably build, operate, maintain, and monetize distributed and decentralized services and applications. An integral component of the Fluence solution is [Aquamarine](https://github.com/fluencelabs/aquamarine), a programming language enabling peer-to-peer coordination for distributed applications and backends.
Fluence provides an open Web3 protocol, framework and tooling to develop and host applications, interfaces and backends on permissionless peer-to-peer networks.
Fluence is also developing [Marine](https://github.com/fluencelabs/marine) - a general purpose compute runtime for multi-module WebAssembly applications with WASI support and a [shared-nothing](https://en.wikipedia.org/wiki/Shared-nothing_architecture) linking scheme. Marine allows for the rapid development and deployment of portable Wasm services which can be composed and coordinated with Aquamarine into secure applications. Furthermore, Fluence has implemented innovations at the p2p [node](https://github.com/fluencelabs/fluence) level, such as [TrustGraph](https://github.com/fluencelabs/trust-graph), local development and testing tools, such as [Marine-Repl](https://github.com/fluencelabs/marine/tree/master/tools/repl), and support tooling, such as [Fluence Distributor](https://github.com/fluencelabs/proto-distributor).
The Fluence Web3 stack enables
In combination, Aquamarine and the Fluence runtimes and tools allow developers to avoid the typical peer-to-peer development challenges and accelerate the development and deployment of distributed services and applications.
* programmable network requests
* distributed applications built by composition without centralization
* communication, access and transactional security as first class citizens
* extensibility through adapter/wrapper services
* efficiencies and improved time to market arising from the reuse of deployed services and significantly reduced devops requirements
The remainder of this document introduces a set of incremental, hands-on tutorials developing with Aquamarine on the Fluence stack.
by decoupling business logic from composition, security from business logic and resource management from infrastructure. See Figure 1.
Figure 1: Decentralized Applications Composed From Distributed Services On P2P Nodes ![](https://i.imgur.com/XxC7NN3.png)
An integral component of the Fluence solution is the Aquamarine stack comprised of Aqua and Marine. Aqua is a programming language and runtime environment for peer-to-peer workflows. Marine, on the other hand, is a general purpose runtime and associated tooling for multi-module Wasm applications with WASI support and a shared-nothing linking scheme. That is, Marine runs hosted code on nodes and Aqua facilitates the programming of workflows composed from hosted code. In combination, Aqua and Marine enable any distributed application.
Additional resources and support are available:

View File

@ -2,6 +2,8 @@
* [Introduction](README.md)
* [Thinking In Distributed](p2p.md)
* [Concepts](concepts.md)
* [Quick Start](quick-start.md)
* [Quick Start](quick_start/README.md)
* [Setup](quick_start/quick_start_setup.md)
* [Using a Service](quick_start/quick_start_using_a_service.md)

116
concepts.md Normal file
View File

@ -0,0 +1,116 @@
# Concepts
The Fluence solution enables a new class of decentralized Web3 solutions providing technical, security and business benefits otherwise not available. In order for solution architects and developers to realize these benefits, a shift in philosophy and implementation is required. With the Fluence toolchain available, developers should find it possible to code meaningful Web3 solutions in short order once an understanding of the core concepts and Aqua is in place.
The remainder of this section introduces the core concepts underlying the Fluence solution.
**Particles**
Particles are Fluence's secure distributed state medium, i.e., conflict-free replicated data structures containing application data, workflow scripts and some metadata, that traverse programmatically specified routes in a highly secure manner. That is, _particles_ hop from distributed compute service to distributed compute service across the peer-to-peer network as specified by the application workflow, updating along the way.
Figure 4: Node-Service Perspective Of Particle Workflow ![](https://i.imgur.com/u4beJgh.png)
Not surprisingly, particles are an integral part of the Fluence protocol and stack. It is the very decoupling of data + workflow instructions from the service and network components that allows the secure composition of applications from services distributed across a permissionless peer-to-peer network.
While the application state change resulting from passing a particle "through" a service is quite obvious with respect to the data components, the ensuing state change with respect to the workflow also needs to be recognized; this is handled by the Aqua VM.
As depicted in Figure 4, a particle traverses to a destination node's Aqua VM where the next execution step is evaluated and, if specified, triggered. That is, the service programmatically specified to operate on the particle's data is called from the Aqua VM, the particle's data and workflow \(step\) are updated and the Aqua VM routes the particle to the next specified destination, which may be on the same, another or the client peer.
**Aqua**
An integral enabler of the Fluence solution is Aqua, an open source language purpose-built to enable developers to ergonomically program distributed networks and applications by composition. Aqua scripts compile to an intermediate representation, called AIR, which executes on the Aqua Virtual Machine \(Aqua VM\), itself a Wasm module hosted on the Marine interpreter on every peer node.
Figure 5: From Aqua Script To Particle Execution
![](.gitbook/assets/image%20%286%29.png)
Currently, compiled Aqua scripts can be executed from Typescript clients based on [Fluence SDK](https://github.com/fluencelabs/fluence-js). For more information about Aqua, see the [Aqua book](https://doc.fluence.dev/aqua-book/).
**Marine**
Marine is Fluence's generalized Wasm runtime executing Wasm Interface Type \(IT\) modules with Aqua VM compatible interfaces on each peer. Let's unpack.
Services behave similarly to microservices: they are created on nodes and served by the Marine VM and can _only_ be called by the Aqua VM. They are also passive in that they can accept incoming calls but can't initiate an outgoing request without being called.
Services are
* comprised of Wasm IT modules that can be composed into applications
* developed in Rust for a wasm32-wasi compile target
* deployed on one or more nodes
* running on the Marine VM which is deployed to every node
Figure 6: Stylized Execution Flow On Peer
![](.gitbook/assets/image%20%285%29.png)
Please note that the Aqua VM is itself a Wasm module running on the Marine VM.
The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) abstracts the Wasm IT implementation detail behind a handy macro that allows developers to easily create Marine VM compatible Wasm modules. In the example below, applying the `marine` macro turns a basic Rust function into a Wasm IT compatible function and enforces the type requirements at the compiler level.
```rust
use marine_rs_sdk::marine;

#[marine]
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```
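Since the `#[marine]` attribute only changes how the function is exported over the Wasm interface, the underlying logic can be exercised as plain Rust before deployment (a minimal local sketch, not a Marine deployment):

```rust
// Plain-Rust version of the greeting logic; the #[marine] attribute is
// omitted here so the function can be run and checked natively.
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}

fn main() {
    let message = greeting("Fluence".to_string());
    assert_eq!(message, "Hi, Fluence");
    println!("{}", message);
}
```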
**Service Creation**
Services are logical constructs instantiated from Wasm modules that contain some business logic and configuration data. That is, services are created, i.e., linked, at the Marine VM runtime level from uploaded Wasm modules and the relevant metadata.
_Blueprints_ are json documents that provide the necessary information to build a service from the associated Wasm modules.
Figure 7: Service Composition and Execution Model
![](.gitbook/assets/image%20%287%29.png)
Please note that services are not able to accept more than one request at a given time.
**Modules**
In the Fluence solution, Wasm IT modules take one of three forms:
* Facade Module: exposes the API of the service comprised of one or more modules. Every service has exactly one facade module
* Pure Module: performs computations without side-effects
* Effector Module: performs at least one computation with a side-effect
It is important for architects and developers to be aware of their module and service construction with respect to state changes.
**Authentication And Permissioning**
Authentication at the service API level is an inherent feature of the Fluence solution. This fine-grained approach essentially provides [ambient authority](https://en.wikipedia.org/wiki/Ambient_authority) out of the box.
In the Fluence solution, this is accomplished by a SecurityTetraplet, which is a data structure with four data fields:
```rust
struct SecurityTetraplet {
    peer_id: String,
    service_id: String,
    fn_name: String,
    getter: String,
}
```
SecurityTetraplets are provided with the function call arguments for each \(service\) function call and are checked by the called service. Hence, authentication based on the **\(service caller id == service owner id\)** relation can be established at service ingress and leveraged to build powerful, fine-grained identity and access management solutions enabling true zero trust architectures.
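As a hedged sketch of that caller-id check (the guard, the constant, and all names are illustrative, not the actual Fluence API):

```rust
// Hypothetical ingress guard: admit a call only when the caller's peer id
// equals the service owner's peer id (service caller id == service owner id).
pub struct SecurityTetraplet {
    pub peer_id: String,
    pub service_id: String,
    pub fn_name: String,
    pub getter: String,
}

const OWNER_PEER_ID: &str = "owner-peer-id"; // assumed owner identity

pub fn is_owner(tetraplet: &SecurityTetraplet) -> bool {
    tetraplet.peer_id == OWNER_PEER_ID
}

fn main() {
    let caller = SecurityTetraplet {
        peer_id: "owner-peer-id".to_string(),
        service_id: "greeting".to_string(),
        fn_name: "greeting".to_string(),
        getter: String::new(),
    };
    assert!(is_owner(&caller));
    println!("caller authenticated: {}", is_owner(&caller));
}
```

A real service would receive the tetraplets alongside the call arguments and reject the request when the check fails.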
**Trust Layer**
The Fluence protocol offers TrustGraph, an alternative to node selection \(i.e., connection and permissioning\) approaches such as Kademlia. A TrustGraph is comprised of subjective weights assigned to nodes to manage peer connections. TrustGraphs are node-operator specific and transitive. That is, a trusted node's trusted neighbors are considered trustworthy.
**Scaling Apps**
As discussed previously, decoupling at the network and business logic levels is at the core of the Fluence protocol and provides the major entry points for scaling solutions.
At the peer-to-peer network level, scaling can be achieved with subnetworks. Subnetworks are currently under development and we will update this section in the near future.
At the service level, we can achieve scale through parallelization due to the decoupling of resource management from infrastructure. That is, sequential and parallel execution flow logic are an inherent part of Aqua's programming model. To achieve concurrency, the target services need to be available on multiple peers, as module calls are blocking.
Figure 8: Stylized Par Execution
![](.gitbook/assets/image%20%288%29.png)
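The blocking-call constraint above can be illustrated in plain Rust with threads (a conceptual sketch of seq vs. par, not Aqua itself; peer and service names are made up):

```rust
use std::thread;
use std::time::Duration;

// Stand-in for a blocking service call hosted on a given peer.
fn service_call(peer: &str, input: &str) -> String {
    thread::sleep(Duration::from_millis(10)); // simulate work
    format!("{peer}:{input}")
}

fn main() {
    // seq: the second call waits for the first and consumes its result.
    let first = service_call("peer-a", "data");
    let second = service_call("peer-b", &first);

    // par: the same service available on two peers can run concurrently,
    // since each blocking call occupies its own thread of execution.
    let left = thread::spawn(|| service_call("peer-a", "data"));
    let right = thread::spawn(|| service_call("peer-b", "data"));
    let results = (left.join().unwrap(), right.join().unwrap());

    println!("seq: {second}; par: {results:?}");
}
```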

26
p2p.md
View File

@ -1,16 +1,28 @@
# Thinking In Distributed
Building and operating distributed networks, backends and applications is a non-trivial undertaking, not only posing technical challenges but also requiring a significant shift in how to view and think about distributed systems.
Permissionless peer-to-peer networks have a lot to offer to developers and solution architects such as decentralization, improved request-response data models and zero trust security at the application and service level. Of course, these capabilities and benefits don't just arise from putting libp2p to work. Instead, a peer-to-peer overlay is required. The Fluence protocol provides such an overlay enabling a powerful distributed data routing and management protocol allowing developers to implement modern and secure Web3 solutions.
Consider a workflow tasked with calling multiple REST endpoints, in sequence, where the response of the previous call is the input to the current call. As illustrated in Figure 1, the application is the focal point and data relay.
**Improved Request-Response Model**
![Figure 1: Stylized Data Flow For Application With Multiple Endpoint Calls](.gitbook/assets/image%20%283%29.png)
In some network models, such as client server, the request-response model generally entails a response returning to the request client. For example, a client application tasked to conduct a credit check of a customer and to inform them with an SMS typically would call a credit check API, consume the response, and then call an SMS API to send the necessary SMS.
Programming a frontend application in the Fluence peer-to-peer solution, an application is not a workflow intermediary but merely the initiator of a workflow as workflow logic and data traverses the network from service to service. See Figure 2 for an illustration and please note that services may be deployed to different nodes as well as to more than one node.
Figure 2: Client Server Request Response Model
![Figure 2: Stylized Data Flow For Application With Fluence Distributed Services ](.gitbook/assets/image%20%284%29.png)
![](https://i.imgur.com/ZYLUzne.png)
In Fluence parlance, we call the workflow + data construct a _particle_ where the workflow is expressed in an AIR script.
The Fluence peer-to-peer protocol, on the other hand, allows for a much more effective Request-Response processing pattern where responses are forward-chained to the next consuming service\(s\) without having to make the return trip to the client. See Figure 3.
This is a rather cursory overview of probably the most salient conceptual difference developers need to take into consideration in order to succeed in a distributed ecosystem. Luckily, Aquamarine and the Fluence stack eliminate most, if not all, of the heavy lifting necessary to develop high-value peer-to-peer networks, backends, and applications.
Figure 3: Fluence P2P Protocol Request Response Model
![](https://i.imgur.com/g3RGBRf.png)
In a Fluence p2p implementation, our client application would call a credit check API deployed or proxy-ed on some peer and then send the response directly to the SMS API service possibly deployed on another peer -- similar to the flow depicted in Figure 1.
Such a significantly flattened request-response model leads to much lower resource requirements for applications in terms of bandwidth and processing capacity, thereby enabling a vast class of "thin" clients ranging from browsers to IoT and edge devices, truly enabling decentralized machine-to-machine communication.
**Zero Trust Security**
The [zero trust security model](https://en.wikipedia.org/wiki/Zero_trust_security_model) assumes the worst reality, i.e., a breach, and proposes a "never trust, always verify" approach. This approach is inherent in the Fluence peer-to-peer protocol and Aqua programming model as every service request can be authenticated at the service API level.
Overall, the Fluence solution enables a modern Web3 runtime and development environment on top of a peer-to-peer stack that allows developers to build powerful and secure distributed applications on thin clients and powerful servers alike.

2
quick-start.md Normal file
View File

@ -0,0 +1,2 @@
# Quick Start

View File

@ -1,6 +1,116 @@
# Quick Start
This section aims to get a new Fluence application developer up and running relatively quickly without any major deep dives into Aquamarine or the Fluence stack. Instead, the focus is to present the essential knowledge necessary to successfully interact with a Fluence peer-to-peer network and to get a good understanding of how to use Aquamarine to compose network services into applications.
Enjoy !!
The Fluence solution enables a new class of decentralized Web3 solutions providing technical, security and business benefits otherwise not available. In order for solution architects and developers to realize these benefits, a shift in philosophy and implementation is required. With the Fluence toolchain available, developers should find it possible to code meaningful Web3 solutions in short order once an understanding of the core concepts and Aqua is in place.
The remainder of this section introduces the core concepts underlying the Fluence solution.
**Particles**
Particles are Fluence's secure distributed state medium, i.e., conflict-free replicated data structures containing application data, workflow scripts and some metadata, that traverse programmatically specified routes in a highly secure manner. That is, _particles_ hop from distributed compute service to distributed compute service across the peer-to-peer network as specified by the application workflow, updating along the way.
Figure 4: Node-Service Perspective Of Particle Workflow ![](https://i.imgur.com/u4beJgh.png)
Not surprisingly, particles are an integral part of the Fluence protocol and stack. It is the very decoupling of data + workflow instructions from the service and network components that allows the secure composition of applications from services distributed across a permissionless peer-to-peer network.
While the application state change resulting from passing a particle "through" a service is quite obvious with respect to the data components, the ensuing state change with respect to the workflow also needs to be recognized; this is handled by the Aqua VM.
As depicted in Figure 4, a particle traverses to a destination node's Aqua VM where the next execution step is evaluated and, if specified, triggered. That is, the service programmatically specified to operate on the particle's data is called from the Aqua VM, the particle's data and workflow \(step\) are updated and the Aqua VM routes the particle to the next specified destination, which may be on the same, another or the client peer.
**Aqua**
An integral enabler of the Fluence solution is Aqua, an open source language purpose-built to enable developers to ergonomically program distributed networks and applications by composition. Aqua scripts compile to an intermediate representation, called AIR, which executes on the Aqua Virtual Machine \(Aqua VM\), itself a Wasm module hosted on the Marine interpreter on every peer node.
Figure 5: From Aqua Script To Particle Execution
![](../.gitbook/assets/image%20%286%29.png)
Currently, compiled Aqua scripts can be executed from Typescript clients based on [Fluence SDK](https://github.com/fluencelabs/fluence-js). For more information about Aqua, see the [Aqua book](https://doc.fluence.dev/aqua-book/).
**Marine**
Marine is Fluence's generalized Wasm runtime executing Wasm Interface Type \(IT\) modules with Aqua VM compatible interfaces on each peer. Let's unpack.
Services behave similarly to microservices: they are created on nodes and served by the Marine VM and can _only_ be called by the Aqua VM. They are also passive in that they can accept incoming calls but can't initiate an outgoing request without being called.
Services are
* comprised of Wasm IT modules that can be composed into applications
* developed in Rust for a wasm32-wasi compile target
* deployed on one or more nodes
* running on the Marine VM which is deployed to every node
Figure 6: Stylized Execution Flow On Peer
![](../.gitbook/assets/image%20%285%29.png)
Please note that the Aqua VM is itself a Wasm module running on the Marine VM.
The [Marine Rust SDK](https://github.com/fluencelabs/marine-rs-sdk) abstracts the Wasm IT implementation detail behind a handy macro that allows developers to easily create Marine VM compatible Wasm modules. In the example below, applying the `marine` macro turns a basic Rust function into a Wasm IT compatible function and enforces the type requirements at the compiler level.
```rust
use marine_rs_sdk::marine;

#[marine]
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```
**Service Creation**
Services are logical constructs instantiated from Wasm modules that contain some business logic and configuration data. That is, services are created, i.e., linked, at the Marine VM runtime level from uploaded Wasm modules and the relevant metadata.
_Blueprints_ are json documents that provide the necessary information to build a service from the associated Wasm modules.
Figure 7: Service Composition and Execution Model
![](../.gitbook/assets/image%20%287%29.png)
Please note that services are not able to accept more than one request at a given time.
**Modules**
In the Fluence solution, Wasm IT modules take one of three forms:
* Facade Module: exposes the API of the service comprised of one or more modules. Every service has exactly one facade module
* Pure Module: performs computations without side-effects
* Effector Module: performs at least one computation with a side-effect
It is important for architects and developers to be aware of their module and service construction with respect to state changes.
**Authentication And Permissioning**
Authentication at the service API level is an inherent feature of the Fluence solution. This fine-grained approach essentially provides [ambient authority](https://en.wikipedia.org/wiki/Ambient_authority) out of the box.
In the Fluence solution, this is accomplished by a SecurityTetraplet, which is a data structure with four data fields:
```rust
struct SecurityTetraplet {
    peer_id: String,
    service_id: String,
    fn_name: String,
    getter: String,
}
```
SecurityTetraplets are provided with the function call arguments for each \(service\) function call and are checked by the called service. Hence, authentication based on the **\(service caller id == service owner id\)** relation can be established at service ingress and leveraged to build powerful, fine-grained identity and access management solutions enabling true zero trust architectures.
**Trust Layer**
The Fluence protocol offers TrustGraph, an alternative to node selection \(i.e., connection and permissioning\) approaches such as Kademlia. A TrustGraph is comprised of subjective weights assigned to nodes to manage peer connections. TrustGraphs are node-operator specific and transitive. That is, a trusted node's trusted neighbors are considered trustworthy.
**Scaling Apps**
As discussed previously, decoupling at the network and business logic levels is at the core of the Fluence protocol and provides the major entry points for scaling solutions.
At the peer-to-peer network level, scaling can be achieved with subnetworks. Subnetworks are currently under development and we will update this section in the near future.
At the service level, we can achieve scale through parallelization due to the decoupling of resource management from infrastructure. That is, sequential and parallel execution flow logic are an inherent part of Aqua's programming model. To achieve concurrency, the target services need to be available on multiple peers, as module calls are blocking.
Figure 8: Stylized Par Execution
![](../.gitbook/assets/image%20%288%29.png)