mirror of https://github.com/fluencelabs/aqua-book
synced 2024-12-04 23:30:18 +00:00

GitBook: [alpha] 23 pages modified

This commit is contained in:
parent ba3407058f
commit 8e9f9e3ef2
@ -2,19 +2,7 @@

[Aqua](https://github.com/fluencelabs/aqua), part of Fluence Labs' Aquamarine Web3 stack, is a purpose-built language to program peer-to-peer networks and compose distributed services hosted on peer-to-peer nodes into applications and backends.

In addition to the language specification, Aqua provides a compiler, which produces Aqua Intermediary Representation \(AIR\) and an execution stack, Aqua VM, that is part of every Fluence node implementation to execute AIR. Moreover, Aqua VM is a Wasm module that runs on ... and

### Help And Support

* [Discord](https://discord.gg/)
* [Telegram](https://t.me/fluence_project)
* [https://github.com/fluencelabs/aqua](https://github.com/fluencelabs/aqua)

In addition to the language specification, Aqua provides a compiler, which produces Aqua Intermediary Representation \(AIR\) and an execution stack, Aqua VM, that is part of every Fluence node implementation to execute AIR.
@ -1,28 +1,5 @@

# Installation

Both the Aqua compiler and support library can be installed natively with `npm`.

To install the compiler:

```bash
npm -g install @fluencelabs/aqua-cli
```

and to make the Aqua library available to Typescript applications:

```bash
npm -g install @fluencelabs/aqua-lib
```

Moreover, a VSCode syntax-highlighting extension is available. In VSCode, click on the Extensions button, search for `aqua` and install the extension.

* native tools
* devcontainer?
@ -3,7 +3,7 @@

Every Fluence reference node comes with a set of builtin services which are accessible to Aqua programs. Let's use those readily available services to get the timestamp of a few of our peer-to-peer neighborhood nodes with Aqua.

{% tabs %}
{% tab title="Peer Timestamps With Aqua" %}
{% tab title="Timestamps With Aqua" %}
```text
-- timestamp_getter.aqua
-- bring the builtin services into scope
@ -66,9 +66,9 @@ fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3K
{% endtab %}
{% endtabs %}

In the Aqua code, we essentially create a workflow originating from our client peer to enumerate our neighbor peers from the Kademlia neighborhood based on our reference node specification, call on the builtin timestamp service on each peer and in parallel, join after we collect ten timestamps and return our u64 array of timestamps back to the client peer.

The Aqua script essentially creates a workflow originating from our client peer to enumerate our neighbor peers from the Kademlia neighborhood based on our reference node specification, calls on the builtin timestamp service on each peer in parallel, joins the results stream after we collect ten timestamps, and returns our u64 array of timestamps back to the client peer.
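
For orientation, here is a condensed, hedged sketch of what such a script can look like. It is not the actual ts-oracle source; the builtin service names (`Kademlia.neighborhood`, `Peer.timestamp_ms`, `Op.identity`) and their signatures are assumptions about the aqua-lib builtins.

```text
-- a sketch only; service names and signatures are assumed, not taken from the example
func ts_getter(node: string) -> []u64:
  res: *u64                          -- stream that collects the timestamps
  on node:
    k <- Kademlia.neighborhood(node) -- assumed builtin: list neighbor peers
    for p <- k par:                  -- visit every neighbor in parallel
      on p:
        res <- Peer.timestamp_ms()   -- assumed builtin: append this peer's timestamp
    Op.identity(res!9)               -- join: wait until ten timestamps are collected
  <- res
```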

Once we have our file, let's copy it to a directory called aqua-scripts and create an empty directory, air-scripts. Now we can compile our aqua script with the aqua-cli tool and find our AIR file in the air-scripts directory:

See the [ts-oracle example](https://github.com/fluencelabs/examples/tree/main/ts-oracle) for the corresponding Aqua files in the `aqua-script` directory. Now that we have our script, let's compile it with the aqua-cli tool and find our AIR file in the air-scripts directory:

{% tabs %}
{% tab title="Compile" %}
@ -85,11 +85,13 @@ timestamp_getter.ts_getter.air
{% endtab %}
{% endtabs %}

Once we have our AIR file we can either use Typescript or the command line client. Let's use the command line client `fldist`:

Once we have our AIR file we can use either a Typescript or a command line client. Let's use the command line client `fldist`:

{% tabs %}
{% tab title="Run Air scripts" %}
```text
# if you don't have fldist on your machine:
# npm -g install @fluencelabs/fldist
# execute the AIR script from our compile phase with a peer id
fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf"}' --generated
```
@ -116,5 +118,5 @@ fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3K
{% endtab %}
{% endtabs %}

And that's it. We now have ten reasonable timestamps from our Kademlia neighbors. See the [ts-oracle example](https://github.com/fluencelabs/examples/tree/main/ts-oracle) for the corresponding Aqua and Air files, respectively.

And that's it. We now have ten timestamps right from our Kademlia neighbors.
@ -52,3 +52,5 @@ Reference:

* [Expressions](expressions/)
@ -1,12 +1,12 @@

# CRDT Streams

In Aqua, an ordinary value is a name that points to a single result:

In Aqua, ordinary value is a name that points to a single result:

```text
value <- foo()
```

A stream, on the other hand, is a name that points to zero or more results:

A stream is a name that points to a number of results \(zero or more\):

```text
value: *string
@ -14,7 +14,7 @@ value <- foo()
value <- foo()
```

Stream is a kind of [collection](types.md#collection-types) and can be used in place of other collections:

A stream is a kind of [collection](types.md#collection-types), and can be used where other collections are:

```text
func foo(peer: string, relay: ?string):
@ -34,15 +34,15 @@ func bar(peer: string, relay: string):
  foo(peer, relayMaybe)
```

But the most powerful use of streams pertains to their use with parallel execution, which incurs non-determinism.

But the most powerful uses of streams come along with parallelism, which incurs non-determinism.

### Streams: Lifecycle And Guarantees

### Stream lifecycle and guarantees

A stream's lifecycle can be separated into three stages:

A stream's lifecycle can be divided into three stages:

* Source: \(Parallel\) Writes to a stream
* Map: Handling the stream values
* Sink: Converting the resulting stream into a scalar
* Sink: Converting the resulting stream into scalar

Consider the following example:

@ -50,7 +50,7 @@ Consider the following example:

```text
func foo(peers: []string) -> string:
  resp: *string

  -- Go to all peers in parallel
  -- Will go to all peers in parallel
  for p <- peers par:
    on p:
      -- Do something
@ -71,15 +71,15 @@ func foo(peers: []string) -> string:

```

In this case, for each peer in peers, something is going to be written into `resp` stream.

In this case, for each peer in peers, something is going to be written into resp stream.

Every peer `p` in peers does not know anything about how the other iterations proceed.

Every peer p in peers does not know anything about how the other iterations proceed.

Once something is written to `resp` stream, the second for is triggered. This is the mapping stage.

Once something is written to the `resp` stream, the second `for` is triggered. It's the mapping stage.

And then the results are sent to the first peer, to call Op.identity there. This Op.identity waits until element number 5 is defined on `resp2` stream.

And then the results are sent to the first peer, to call Op.identity there. This Op.identity waits until element number 5 is defined on resp2 stream.

When the join is complete, the stream is consumed by the concatenation service to produce a scalar value, which is returned.

When it is, the stream as a whole is consumed to produce a scalar value, which is returned.

During execution, involved peers have different views on the state of execution: each of the `for` parallel branches have no view or access to the other branches' data and eventually, the execution flows to the initial peer. The initial peer then merges writes to the `resp` stream and to the `resp2` stream, respectively. These writes are done in conflict-free fashion. Furthermore, the respective heads of the `resp`, `resp2` streams will not change from each peer's point of view as they are immutable and new values can only be appended. However, different peers may have a different order of the stream values depending on the order of receiving these values.

During execution, the involved peers have different views on the state of execution: the parallel branches of `for` have no access to each other's data. Finally, execution flows to the initial peer. The initial peer merges writes to the `resp` stream and writes to the `resp2` stream. This is done in a conflict-free fashion. Moreover, the heads of the `resp` and `resp2` streams will not change from each peer's point of view: they are immutable, and new values are only appended. However, different peers may see a different order of the stream values, depending on the order in which these values were received.
@ -1,6 +1,6 @@

# Header

## Header expressions

### Header expressions

`import`
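
For example, a header can import definitions from another Aqua file. The path below is only an illustration and assumes the aqua-lib package layout:

```text
-- a hedged sketch; the import path is an assumption
import "@fluencelabs/aqua-lib/builtin.aqua"
```
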
@ -4,6 +4,8 @@ description: Static configuration pieces that affect compilation

# Overrideable constants

`const`

Constant definition.
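
A sketch of what such a definition can look like; treating `?=` as a default value that can be overridden at compile time is an assumption about the syntax:

```text
-- a hedged sketch; the ?= form (overridable default) is an assumption
const DEFAULT_TTL ?= 6000
```
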
@ -126,7 +126,7 @@ What will happen when execution comes to `baz`?

Actually, the script will be executed twice: the first time it will be sent from `peer1`, and the second time from `peer2`. Or the other way round: `peer2` then `peer1`; we don't know which is faster.

When execution will get to `baz` for the first time, [Aqua VM]() will realize that it lacks some data that is expected to be computed above in the parallel branch. And halt.

When execution gets to `baz` for the first time, [Aqua VM](../../runtimes/aqua-vm.md) will realize that it lacks some data that is expected to be computed above in the parallel branch, and halt.

After the second branch executes, the VM will be woken up again, reach the same piece of code and realize that now it has enough data to proceed.
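
The situation can be sketched roughly as follows; the peer and function names are illustrative, and whether `par` may prefix an `on` block exactly like this is an assumption:

```text
-- a rough sketch of two parallel branches joined by a later call
on peer1:
  x <- foo()
par on peer2:
  y <- foo()
-- the first branch to reach baz halts until the other branch's data arrives
baz(x, y)
```
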
@ -4,11 +4,12 @@ description: Define where the code is to be executed and how to get there

# Topology

Aqua lets developers describe the whole distributed workflow in a single script, link data, recover from errors, implement complex patterns like backpressure, and more. Hence, topology is at the heart of Aqua.

Topology in Aqua is declarative: you just need to say where a piece of code must be executed, on what peer, and optionally how to get there. The Aqua compiler will add all the required network hops.

## On expression

### On expression

The `on` expression moves execution to the specified peer:

@ -27,7 +28,7 @@ on myPeer:
  baz()
```

## `%init_peer_id%`

### `%init_peer_id%`

There is one custom peer ID that is always in scope: `%init_peer_id%`. It points to the peer that initiated this request.

@ -35,7 +36,7 @@ There is one custom peer ID that is always in scope: `%init_peer_id%`. It points
Using `on %init_peer_id%` is an anti-pattern: There is no way to ensure that the init peer is accessible from the currently used part of the network.
{% endhint %}

## More complex scenarios

### More complex scenarios

Consider this example:

@ -69,7 +70,7 @@ Declarative topology definition always works the same way.

* `bar(2)` is executed on `"peer baz"`, despite the fact that `foo` does a topological transition. `bar(2)` is in the scope of `on "peer baz"`, so it will be executed there
* `bar(3)` is executed where `bar(1)` was: in the root scope of `baz`, wherever it was called from

## Accessing peers `via` other peers

### Accessing peers `via` other peers

In a distributed network it is quite common that a peer is not directly accessible. For example, a browser has no public network interface and you cannot open a socket to a browser at will. Such constraints warrant a `relay` pattern: there should be a well-connected peer that relays requests from a peer to the network and vice versa.
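
In Aqua this is expressed with the `via` keyword from this section's heading, roughly as in the following sketch; `peer` and `relay` are plain string arguments and `bar` is an illustrative call:

```text
-- reach a peer that is only accessible through a relay
func foo(peer: string, relay: string):
  on peer via relay:
    bar()
```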

@ -136,7 +137,7 @@ foo()

When the `on` scope is ended, it does not affect any further topology moves. As long as you stay inside the indented `on` block, `on` affects the topology and may add additional topology moves, which means more round trips and unnecessary latency.

## Callbacks

### Callbacks

What if you want to return something to the initial peer? For example, implement a request-response pattern. Or send a bunch of requests to different peers, and render responses as they come, in any order.
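
One way to sketch this is to pass an arrow as an argument and call it with the result once execution flows back; the names below are illustrative, and the section's real callback examples are elided from this excerpt:

```text
-- a hedged sketch: `callback` is an arrow argument invoked with the fetched result
func fetch(peer: string, callback: string -> ()):
  on peer:
    res <- bar()
  callback(res)
```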

@ -191,7 +192,7 @@ func baz():

Passing service function calls as arguments is very fragile as it does not track that a service is resolved in the scope of the call. Abilities variance may fix that.
{% endhint %}

## Parallel execution and topology

### Parallel execution and topology

When blocks are executed in parallel, it is not always necessary to resolve the topology to get to the next peer. The compiler will add topological hops from a `par` branch only if data defined in that branch is used further down the flow.

@ -199,3 +200,5 @@ When blocks are executed in parallel, it is not always necessary to resolve the

What if not all branches return? Execution will halt. Be careful: use `co` if you don't care about the returned data.
{% endhint %}
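
A sketch of the `co` escape hatch mentioned in the hint; whether `co` can prefix an `on` block exactly like this is an assumption:

```text
-- fire-and-forget: the flow does not wait for this branch's result
func ping(peer: string):
  co on peer:
    foo()
```
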
@ -1,6 +1,6 @@

# Types

## Scalars

### Scalars

Scalar types follow the Wasm IT notation.

@ -12,11 +12,11 @@ Scalar types follow the Wasm IT notation.

* Records \(product type\): see below
* Arrays: see Collection Types below

## Literals

### Literals

You can pass booleans \(true, false\), numbers, double-quoted strings as literals.

## Products

### Products

```python
data ProductName:
@ -29,7 +29,7 @@ data OtherProduct:

Fields are accessible with the dot operator `.`, e.g. `product.field`.

## Collection Types

### Collection Types

Aqua has three different types with variable length, denoted by quantifiers `[]`, `*`, and `?`.

@ -41,6 +41,7 @@ Appendable collection with 0..N values: `*`

Any data type can be prepended with a quantifier, e.g. `*u32`, `[][]string`, `?ProductType` are all correct type specifications.

You can access a distinct value of a collection with the `!` operator, optionally followed by an index.

Examples:

@ -59,7 +60,7 @@ maybe_value: ?string
value = maybe_value!
```

## Arrow Types

### Arrow Types

Every function has an arrow type that maps a list of input types to an optional output type.
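
For instance, the function in the following sketch has the arrow type `string, u32 -> bool`; the names, including `check`, are illustrative:

```text
-- this function's arrow type is: string, u32 -> bool
func is_older(name: string, age: u32) -> bool:
  res <- check(name, age)
  <- res
```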

@ -81,7 +82,7 @@ arrow()
x <- arrow()
```

## Type Alias

### Type Alias

For convenience, you can alias a type:

@ -89,12 +90,13 @@ For convenience, you can alias a type:
alias MyAlias = ?string
```

## Type Variance

### Type Variance

Aqua is made for composing data on the open network. That means that you want to compose things if they do compose, even if you don't control their source code.

Therefore Aqua follows the structural typing paradigm: if a type contains all the expected data, then it fits. For example, you can pass `u8` in place of `u16` or `i16`. Or `?bool` in place of `[]bool`. Or `*string` instead of `?string` or `[]string`. The same holds for products.
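
As a sketch of what this buys in practice (function names are illustrative), a stream can be passed to a function that expects an array:

```text
-- *string (a stream) is accepted where []string (an array) is expected
func wants_array(names: []string):
  for n <- names:
    foo(n)

func caller():
  collected: *string
  collected <- bar()
  wants_array(collected)
```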

For arrow types, Aqua checks contravariance on arguments and covariance on the return type.

```text
@ -128,7 +130,7 @@ bar(foo4)

Arrow type `A: D -> C` is a subtype of `A1: D1 -> C1`, if `D1` is a subtype of `D` and `C` is a subtype of `C1`.

## Type Of A Service And A File

### Type Of A Service And A File

A service type is a product of arrows.

@ -159,3 +161,5 @@ data MyServiceType:

{% embed url="https://github.com/fluencelabs/aqua/blob/main/types/src/main/scala/aqua/types/Type.scala" caption="See the types system implementation" %}
@ -18,7 +18,7 @@ on "peer 1":

More on that in the Security section. Now let's see how we can work with values inside the language.

## Arguments

### Arguments

Function arguments are available within the whole function body.

@ -31,7 +31,7 @@ func foo(arg: i32, log: string -> ()):
  log("Wrote arg to responses")
```

## Return values

### Return values

You can assign results of an arrow call to a name, and use this returned value in the code below.

@ -55,7 +55,7 @@ func foo(arg: i32, log: *string):
  log <- bar(arg)
```

## Literals

### Literals

Aqua supports just a few literals: numbers, quoted strings, booleans. You [cannot init a structure](https://github.com/fluencelabs/aqua/issues/167) in Aqua, only obtain it as a result of a function call.

@ -79,7 +79,7 @@ bar(-1)
bar(-0.2)
```

## Getters

### Getters

In Aqua, you can use a getter to peek into a field of a product or an indexed element in an array.

@ -105,8 +105,9 @@ func foo(e: Example):

Note that the `!` operator may fail or halt:

* If it is called on an immutable collection, it will fail if the collection is shorter and has no given index; you can handle the error with [try](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md#try) or [otherwise](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md#otherwise).
* If it is called on an appendable stream, it will wait for some parallel append operation to fulfill, see [Join behavior](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/parallel.md#join-behavior).
* If it is called on an immutable collection, it will fail if the collection is too short and has no value at the given index; you can handle the error with [try](operators/conditional.md#try) or [otherwise](operators/conditional.md#otherwise).
* If it is called on an appendable stream, it will wait for some parallel append operation to fulfill it, see [Join behavior](operators/parallel.md#join-behavior).

{% hint style="warning" %}
The `!` operator can currently only be used with literal indices.
@ -114,10 +115,11 @@ That is,`!2` is valid but`!x` is not valid.
We expect to address this limitation soon.
{% endhint %}

## Assignments

### Assignments

Assignments, `=`, only give a name to a value with an applied getter or to a literal.

```text
func foo(arg: bool, e: Example):
  -- Rename the argument
@ -128,7 +130,7 @@ func foo(arg: bool, e: Example):
  c = "just string value"
```

## Constants

### Constants

Constants are like assignments but in the root scope. They can be used in all function bodies, textually below the place of const definition. Constant values must resolve to a literal.

@ -148,7 +150,7 @@ func bar():
  foo(setting)
```

## Visibility scopes

### Visibility scopes

Visibility scopes follow the contracts of execution flow.

@ -191,7 +193,7 @@ par y <- bar(x)
baz(x, y)
```

Recovery branches in [conditional flow](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md) have no access to the main branch as the main branch exports values, whereas the recovery branch does not:

Recovery branches in [conditional flow](operators/conditional.md) have no access to the main branch as the main branch exports values, whereas the recovery branch does not:

```text
try:
@ -203,9 +205,10 @@ otherwise:

-- y is not available below
willFail(y)
```

## Streams as literals

### Streams as literals

A stream is a special data structure that allows many writes. It has [a dedicated article](crdt-streams.md).
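
A minimal sketch, using only the forms shown in the CRDT Streams article; `foo` is an illustrative producer:

```text
-- declare a stream, append to it twice, and return it as an array
func collect() -> []string:
  res: *string
  res <- foo()
  res <- foo()
  <- res
```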