GitBook: [alpha] 23 pages modified

This commit is contained in:
boneyard93501 2021-06-29 01:57:35 +00:00 committed by gitbook-bot
parent ba3407058f
commit 8e9f9e3ef2
No known key found for this signature in database
GPG Key ID: 07D2180C7B12D0FF
13 changed files with 107 additions and 126 deletions

View File

@ -2,19 +2,7 @@
[Aqua](https://github.com/fluencelabs/aqua), part of Fluence Labs' Aquamarine Web3 stack, is a purpose-built language to program peer-to-peer networks and compose distributed services hosted on peer-to-peer nodes into applications and backends.
In addition to the language specification, Aqua provides a compiler, which produces Aqua Intermediary Representation \(AIR\), and an execution stack, Aqua VM, that is part of every Fluence node implementation to execute AIR. Moreover, Aqua VM is a Wasm module that runs on ...
### Help And Support
* [Discord](https://discord.gg/)
* [Telegram](https://t.me/fluence_project)
* [https://github.com/fluencelabs/aqua](https://github.com/fluencelabs/aqua)

View File

@ -1,28 +1,5 @@
# Installation
Both the Aqua compiler and the support library can be installed natively with `npm`.
To install the compiler:
```bash
npm -g install @fluencelabs/aqua-cli
```
and to make the Aqua library available to Typescript applications:
```bash
npm -g install @fluencelabs/aqua-lib
```
Moreover, a VSCode syntax-highlighting extension is available. In VSCode, click on the Extensions button, search for `aqua` and install the extension.
* native tools
* devcontainer?

View File

@ -3,7 +3,7 @@
Every Fluence reference node comes with a set of builtin services which are accessible to Aqua programs. Let's use those readily available services to get the timestamp of a few of our peer-to-peer neighborhood nodes with Aqua.
{% tabs %}
{% tab title="Peer Timestamps With Aqua" %}
```text
-- timestamp_getter.aqua
-- bring the builtin services into scope
@ -66,9 +66,9 @@ fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3K
{% endtab %}
{% endtabs %}
The Aqua script essentially creates a workflow originating from our client peer: it enumerates our neighbor peers from the Kademlia neighborhood based on our reference node specification, calls the builtin timestamp service on each peer in parallel, joins the results stream after we collect ten timestamps, and returns our `u64` array of timestamps back to the client peer.
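The flow just described can be sketched in Aqua roughly as follows. This is a hedged reconstruction, not the authoritative file: the builtin names `Op.string_to_b58`, `Kademlia.neighborhood`, and `Peer.timestamp_ms` and their signatures are assumptions based on `aqua-lib`, and the join is expressed by waiting on the tenth stream element; see the referenced `timestamp_getter.aqua` for the real source.

```text
-- hypothetical sketch of timestamp_getter.aqua
func ts_getter(node: string) -> []u64:
  res: *u64                        -- stream collecting the timestamps
  on node:
    k <- Op.string_to_b58(node)    -- Kademlia key of our reference node
    nodes <- Kademlia.neighborhood(k)
    for n <- nodes par:            -- query each neighbor in parallel
      on n:
        res <- Peer.timestamp_ms() -- builtin timestamp service
    Op.identity(res!9)             -- join: wait for the tenth timestamp
  <- res
```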
Once we have our file, let's copy it to a directory called `aqua-scripts` and create an empty directory called `air-scripts`. See the [ts-oracle example](https://github.com/fluencelabs/examples/tree/main/ts-oracle) for the corresponding Aqua files. Now we can compile our Aqua script with the `aqua-cli` tool and find our AIR file in the `air-scripts` directory:
{% tabs %}
{% tab title="Compile" %}
@ -85,11 +85,13 @@ timestamp_getter.ts_getter.air
{% endtab %}
{% endtabs %}
Once we have our AIR file we can either use a Typescript or a command line client. Let's use the command line client `fldist`:
{% tabs %}
{% tab title="Run Air scripts" %}
```text
# if you don't have fldist on your machine:
# npm -g install @fluencelabs/fldist
# execute the AIR script from our compile phase with a peer id
fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf"}' --generated
```
@ -116,5 +118,5 @@ fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3K
{% endtab %}
{% endtabs %}
And that's it. We now have ten timestamps right from our Kademlia neighbors.

View File

@ -9,17 +9,17 @@ func foo(): -- Comments are allowed almost everywhere
bar(5)
```
Values in Aqua have types, which are designated by a colon, `:`, as seen in the function signature below. The type of a return, which is yielded when a function is executed, is denoted by an arrow pointing to the right, `->`, whereas yielding is denoted by an arrow pointing to the left, `<-`.
```text
-- Define a function that yields a string
func bar(arg: i16) -> string:
-- Call a function
smth(arg)
-- Yield a value from a function
x <- smth(arg)
-- Return a yielded results from a function
<- "return literal"
```
@ -52,3 +52,5 @@ Reference:
* [Expressions](expressions/)

View File

@ -1,12 +1,12 @@
# CRDT Streams
In Aqua, an ordinary value is a name that points to a single result:
```text
value <- foo()
```
A stream, on the other hand, is a name that points to zero or more results:
```text
value: *string
@ -14,7 +14,7 @@ value <- foo()
value <- foo()
```
A stream is a kind of [collection](types.md#collection-types) and can be used in place of other collections:
```text
func foo(peer: string, relay: ?string):
@ -34,15 +34,15 @@ func bar(peer: string, relay: string):
foo(peer, relayMaybe)
```
But the most powerful use of streams pertains to parallel execution, which incurs non-determinism.
### Streams: Lifecycle And Guarantees
A stream's lifecycle can be separated into three stages:
* Source: \(Parallel\) Writes to a stream
* Map: Handling the stream values
* Sink: Converting the resulting stream into a scalar
Consider the following example:
@ -50,7 +50,7 @@ Consider the following example:
func foo(peers: []string) -> string:
resp: *string
-- Go to all peers in parallel
for p <- peers par:
on p:
-- Do something
@ -71,15 +71,15 @@ func foo(peers: []string) -> string:
```
In this case, for each peer in `peers`, something is going to be written into the `resp` stream.
Every peer `p` in `peers` does not know anything about how the other iterations proceed.
Once something is written to the `resp` stream, the second `for` is triggered. This is the mapping stage.
Then the results are sent to the first peer, to call `Op.identity` there. This `Op.identity` waits until element number 5 is defined on the `resp2` stream.
When the join is complete, the stream is consumed by the concatenation service to produce a scalar value, which is returned.
During execution, the involved peers have different views on the state of execution: each of the `for` parallel branches has no view of, or access to, the other branches' data, and eventually the execution flows to the initial peer. The initial peer then merges the writes to the `resp` stream and to the `resp2` stream, respectively. These writes are done in a conflict-free fashion. Furthermore, the respective heads of the `resp` and `resp2` streams will not change from each peer's point of view, as they are immutable and new values can only be appended. However, different peers may see a different order of the stream values, depending on the order in which these values were received.

View File

@ -16,7 +16,7 @@ service MySrv:
func do_something(): -- arrow of type: -> ()
MySrv "srv id"
MySrv.foo()
```
* list all expressions

View File

@ -1,6 +1,6 @@
# Header
## Header expressions
`import`
@ -10,7 +10,7 @@ The `import` expression brings everything defined within the imported file into
import "path/to/file.aqua"
```
The file path to be imported is first resolved relative to the source file path, followed by checking the `-imports` directories.
See [Imports & Exports](../statements-1.md) for details.

View File

@ -4,6 +4,8 @@ description: Static configuration pieces that affect compilation
# Overrideable constants
`const`
Constant definition.

View File

@ -14,7 +14,7 @@ Services that are a part of the protocol, i.e. are available from the peer node,
service Peer("peer"):
foo() -- no arguments, no return
bar(i: bool) -> bool
func usePeer() -> bool:
Peer.foo() -- results in a call of service "peer", function "foo", on current peer ID
z <- Peer.bar(true)
@ -27,7 +27,7 @@ Example of a custom service:
service MyService:
foo()
bar(i: bool, z: i32) -> string
func useMyService(k: i32) -> string:
-- Need to tell the compiler what does "my service" mean in this scope
MyService "my service id"
@ -36,7 +36,7 @@ func useMyService(k: i32) -> string:
-- Need to redefine MyService in scope of this peer as well
MyService "another service id"
z <- MyService.bar(false, k)
<- z
<- z
```
Service definitions have types. Type of a service is a product type of arrows. See [Types](../types.md#type-of-a-service-and-a-file).

View File

@ -126,7 +126,7 @@ What will happen when execution comes to `baz`?
Actually, the script will be executed twice: the first time it will be sent from `peer1`, and the second time from `peer2`. Or the other way round: `peer2` then `peer1`; we don't know which is faster.
When execution gets to `baz` for the first time, [Aqua VM](../../runtimes/aqua-vm.md) will realize that it lacks some data that is expected to be computed above in the parallel branch. And halt.
After the second branch executes, the VM will be woken up again, reach the same piece of code, and realize that it now has enough data to proceed.
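The halt-and-wake behavior can be sketched in a few lines (a hypothetical example; `foo`, `bar`, and `baz` are illustrative arrows, not real services):

```text
func fooBarBaz(a: i32):
  x <- foo(a)      -- first branch
  par y <- bar(a)  -- second branch, executed in parallel
  -- whichever branch reaches this call first lacks the other
  -- branch's value and halts; the second arrival has both
  -- x and y and proceeds
  baz(x, y)
```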

View File

@ -4,11 +4,12 @@ description: Define where the code is to be executed and how to get there
# Topology
Aqua lets developers describe the whole distributed workflow in a single script, link data, recover from errors, implement complex patterns like backpressure, and more. Hence, topology is at the heart of Aqua.
Topology in Aqua is declarative: you just need to say where a piece of code must be executed, on what peer, and optionally how to get there. The Aqua compiler will add all the required network hops.
## On expression
`on` expression moves execution to the specified peer:
@ -27,15 +28,15 @@ on myPeer:
baz()
```
## `%init_peer_id%`
There is one custom peer ID that is always in scope: `%init_peer_id%`. It points to the peer that initiated this request.
{% hint style="warning" %}
Using `on %init_peer_id%` is an anti-pattern: There is no way to ensure that init peer is accessible from the currently used part of the network.
{% endhint %}
## More complex scenarios
Consider this example:
@ -43,16 +44,16 @@ Consider this example:
func foo():
on "peer foo":
do_foo()
func bar(i: i32):
do_bar()
func baz():
bar(1)
on "peer baz":
foo()
bar(2)
bar(3)
```
Take a minute to think about:
@ -69,7 +70,7 @@ Declarative topology definition always works the same way.
* `bar(2)` is executed on `"peer baz"`, despite the fact that `foo` makes a topological transition. `bar(2)` is in the scope of `on "peer baz"`, so it will be executed there
* `bar(3)` is executed where `bar(1)` was: in the root scope of `baz`, wherever it was called from
## Accessing peers `via` other peers
In a distributed network it is quite common that a peer is not directly accessible. For example, a browser has no public network interface and you cannot open a socket to a browser at will. Such constraints warrant a `relay` pattern: there should be a well-connected peer that relays requests from a peer to the network and vice versa.
@ -80,12 +81,12 @@ Relays are handled with `via`:
-- the compiler will add an additional hop to some relay
on "some peer" via "some relay":
foo()
-- More complex path: first go to relay2, then to relay1,
-- then to peer. When going out of peer, do it in reverse
on "peer" via relay1 via relay2:
foo()
-- You can pass any collection of strings to relay,
-- and it will go through it if it's defined,
-- or directly if not
@ -136,7 +137,7 @@ foo()
When the `on` scope is ended, it does not affect any further topology moves. Until you stop indentation, `on` affects the topology and may add additional topology moves, which means more roundtrips and unnecessary latency.
## Callbacks
What if you want to return something to the initial peer? For example, implement a request-response pattern. Or send a bunch of requests to different peers, and render responses as they come, in any order.
@ -149,7 +150,7 @@ func run(updateModel: Model -> (), logMessage: string -> ()):
updateModel(m)
par on "other peer":
x <- getMessage()
logMessage(x)
```
Callbacks have the [arrow type](types.md#arrow-types).
@ -160,15 +161,15 @@ If you pass just ordinary functions as arrow-type arguments, they will work as i
func foo():
on "peer 1":
doFoo()
func bar(cb: -> ()):
on "peer2":
cb()
func baz():
-- foo will go to peer 1
-- bar will go to peer 2
bar(foo)
```
If you pass a service call as a callback, it will be executed locally on the node where you called it. That might change.
@ -191,7 +192,7 @@ func baz():
Passing service function calls as arguments is very fragile as it does not track that a service is resolved in the scope of the call. Abilities variance may fix that.
{% endhint %}
## Parallel execution and topology
When blocks are executed in parallel, it is not always necessary to resolve the topology to get to the next peer. The compiler will add topologic hops from the par branch only if data defined in that branch is used down the flow.
@ -199,3 +200,5 @@ When blocks are executed in parallel, it is not always necessary to resolve the
What if all branches do not return? Execution will halt. Be careful, use `co` if you don't care about the returned data.
{% endhint %}
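As a hedged illustration of the hint above, `co` detaches a branch so that the flow below does not join on its data (the service name `Log` and the exact `co` placement are assumptions):

```text
-- hypothetical sketch: fire-and-forget branch
func notifyAndProceed(peer: string):
  co on peer:
    Log.info("side effect only")  -- nothing below waits for this
  foo()                           -- continues without joining the co branch
```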

View File

@ -1,6 +1,6 @@
# Types
## Scalars
Scalar types follow the Wasm IT notation.
@ -12,24 +12,24 @@ Scalar types follow the Wasm IT notation.
* Records \(product type\): see below
* Arrays: see Collection Types below
## Literals
You can pass booleans \(true, false\), numbers, double-quoted strings as literals.
## Products
```python
data ProductName:
field_name: string
data OtherProduct:
product: ProductName
flag: bool
```
Fields are accessible with the dot operator `.`, e.g. `product.field`.
## Collection Types
Aqua has three different types with variable length, denoted by quantifiers `[]`, `*`, and `?`.
@ -41,6 +41,7 @@ Appendable collection with 0..N values: `*`
Any data type can be prepended with a quantifier, e.g. `*u32`, `[][]string`, `?ProductType` are all correct type specifications.
You can access a distinct value of a collection with `!` operator, optionally followed by an index.
Examples:
@ -59,7 +60,7 @@ maybe_value: ?string
value = maybe_value!
```
## Arrow Types
Every function has an arrow type that maps a list of input types to an optional output type.
@ -81,7 +82,7 @@ arrow()
x <- arrow()
```
## Type Alias
For convenience, you can alias a type:
@ -89,12 +90,13 @@ For convenience, you can alias a type:
alias MyAlias = ?string
```
## Type Variance
Aqua is made for composing data on the open network. That means that you want to compose things if they do compose, even if you don't control their source code.
Therefore Aqua follows the structural typing paradigm: if a type contains all the expected data, then it fits. For example, you can pass `u8` in place of `u16` or `i16`. Or `?bool` in place of `[]bool`. Or `*string` instead of `?string` or `[]string`. The same holds for products.
For arrow types, Aqua checks contravariance on arguments and covariance on the return type.
```text
@ -128,17 +130,17 @@ bar(foo4)
Arrow type `A: D -> C` is a subtype of `A1: D1 -> C1`, if `D1` is a subtype of `D` and `C` is a subtype of `C1`.
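Spelled out with the scalar rules above (`u8` fits where `u16` is expected), the arrow subtyping rule can be illustrated as follows; the function names are hypothetical:

```text
-- D1 = u8 is a subtype of D = u16, and C = u8 is a subtype of C1 = u16,
-- so an arrow of type u16 -> u8 can be used where u8 -> u16 is expected
func callsArrow(arrow: u8 -> u16, x: u8) -> u16:
  <- arrow(x)
-- passing an arrow of type u16 -> u8 as `arrow` type-checks:
-- it accepts any u8 argument, and its u8 result fits a u16
```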
## Type Of A Service And A File
A service type is a product of arrows.
```text
service MyService:
foo(arg: string) -> bool
-- type of this service is:
data MyServiceType:
foo: string -> bool
```
The file is a product of all defined constants and functions \(treated as arrows\). Type definitions in the file do not go to the file type.
@ -148,14 +150,16 @@ The file is a product of all defined constants and functions \(treated as arrows
func foo(arg: string) -> bool:
...
const flag ?= true
-- type of MyFile.aqua
data MyServiceType:
foo: string -> bool
flag: bool
```
{% embed url="https://github.com/fluencelabs/aqua/blob/main/types/src/main/scala/aqua/types/Type.scala" caption="See the types system implementation" %}

View File

@ -18,7 +18,7 @@ on "peer 1":
More on that in the Security section. Now let's see how we can work with values inside the language.
## Arguments
Function arguments are available within the whole function body.
@ -26,14 +26,14 @@ Function arguments are available within the whole function body.
func foo(arg: i32, log: string -> ()):
-- Use data arguments
bar(arg)
-- Arguments can have arrow type and be used as strings
log("Wrote arg to responses")
```
## Return values
You can assign results of an arrow call to a name, and use this returned value in the code below.
```text
-- Imagine a Stringify service that's always available
@ -47,7 +47,7 @@ func bar(arg: i32) -> string:
-- Starting from there, you can use x
-- Pass x out of the function scope as the return value
<- x
func foo(arg: i32, log: *string):
-- Use bar to convert arg to string, push that string
@ -55,7 +55,7 @@ func foo(arg: i32, log: *string):
log <- bar(arg)
```
## Literals
Aqua supports just a few literals: numbers, quoted strings, booleans. You [cannot init a structure](https://github.com/fluencelabs/aqua/issues/167) in Aqua, only obtain it as a result of a function call.
@ -67,7 +67,7 @@ foo("double quoted string literal")
-- Booleans are true or false
if x == false:
foo("false is a literal")
-- Numbers are different
-- Any number:
bar(1)
@ -79,9 +79,9 @@ bar(-1)
bar(-0.2)
```
## Getters
In Aqua, you can use a getter to peek into a field of a product or an indexed element in an array.
```text
data Sub:
@ -91,7 +91,7 @@ data Example:
field: u32
arr: []Sub
child: Sub
func foo(e: Example):
bar(e.field) -- u32
bar(e.child) -- Sub
@ -100,13 +100,14 @@ func foo(e: Example):
bar(e.arr!) -- gets the 0 element
bar(e.arr!.sub) -- string
bar(e.arr!2) -- gets the 2nd element
bar(e.arr!2.sub) -- string
```
Note that the `!` operator may fail or halt:
* If it is called on an immutable collection, it will fail if the collection is shorter than the given index; you can handle the error with [try](operators/conditional.md#try) or [otherwise](operators/conditional.md#otherwise).
* If it is called on an appendable stream, it will wait for some parallel append operation to fulfill; see [Join behavior](operators/parallel.md#join-behavior).
{% hint style="warning" %}
The `!` operator can currently only be used with literal indices.
@ -114,10 +115,11 @@ That is,`!2` is valid but`!x` is not valid.
We expect to address this limitation soon.
{% endhint %}
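For the failing case on immutable collections, the error can be contained with `try`/`otherwise` and the stream-based conditional return pattern; this is a hedged sketch, and the use of `Op.identity` as a pass-through is an assumption:

```text
-- hypothetical sketch: recover from an out-of-range index
func first_or(xs: []string, default: string) -> string:
  res: *string
  try:
    res <- Op.identity(xs!)       -- fails if xs is empty
  otherwise:
    res <- Op.identity(default)
  <- res!
```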
## Assignments
Assignments, `=`, only give a name to a value with an applied getter, or to a literal.
```text
func foo(arg: bool, e: Example):
-- Rename the argument
@ -128,7 +130,7 @@ func foo(arg: bool, e: Example):
c = "just string value"
```
## Constants
Constants are like assignments but in the root scope. They can be used in all function bodies, textually below the place of const definition. Constant values must resolve to a literal.
@ -148,7 +150,7 @@ func bar():
foo(setting)
```
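Putting the pieces of this section together, a minimal sketch (the constant and function names are illustrative):

```text
-- a constant with a default that can be overridden at compile time
const setting ?= "default"

func bar():
  -- setting resolves to "default" unless overridden
  foo(setting)
```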
## Visibility scopes
Visibility scopes follow the contracts of execution flow.
@ -159,7 +161,7 @@ Functions have isolated scopes:
```text
func foo():
a = 5
func bar():
-- a is not defined in this function scope
a = 7
@ -174,9 +176,9 @@ func foo():
for y <- ys:
-- Can use what was defined above
z <- bar(x)
-- z is not defined in scope
z = 7
```
[Parallel](flow/parallel.md#join-behavior) branches have [no access](https://github.com/fluencelabs/aqua/issues/90) to each other's data:
@ -191,7 +193,7 @@ par y <- bar(x)
baz(x, y)
```
Recovery branches in [conditional flow](operators/conditional.md) have no access to the main branch as the main branch exports values, whereas the recovery branch does not:
```text
try:
@ -200,12 +202,13 @@ otherwise:
-- this is not possible and will fail
bar(x)
y <- baz()
-- y is not available below
willFail(y)
```
## Streams as literals
A stream is a special data structure that allows many writes. It has [a dedicated article](crdt-streams.md).
@ -222,13 +225,13 @@ par resp <- bar()
for x <- xs:
-- Write to a stream that's defined above
resp <- baz()
try:
resp <- baz()
otherwise:
on "other peer":
resp <- baz()
-- Now resp can be used in place of arrays and optional values
-- assume fn: []string -> ()
fn(resp)
@ -236,7 +239,7 @@ fn(resp)
-- Can call fn with empty stream: you can use it
-- to construct empty values of any collection types
nilString: *string
fn(nilString)
```
One of the most frequently used patterns for streams is [Conditional return](flow/conditional.md#conditional-return).