reset gitbook sync

This commit is contained in:
boneyard93501 2022-04-08 17:38:58 -05:00
parent 9efda4760e
commit 12c7a988fd
38 changed files with 0 additions and 2086 deletions


View File

@ -1,23 +0,0 @@
# Introduction
Fluence is an open protocol and a framework for internet or private cloud applications. Fluence provides a peer-to-peer development stack so that you can create applications free of proprietary cloud platforms, centralized APIs, and untrustworthy third parties. The Fluence stack is open source and is maintained and governed by a community of developers.
At the core of Fluence is the open-source language **Aqua** that allows for the programming of peer-to-peer scenarios separately from the computations on peers. Applications are turned into hostless workflows over distributed function calls, which enables various levels of decentralization: from handling by a limited set of servers to complete peer-to-peer architecture by connecting user devices directly.
{% embed url="https://youtu.be/M_u-EnWrMOQ" %}
This book is dedicated to all things Aqua. It is currently in its **alpha** version, and we expect to expand both the breadth and depth of Aqua's coverage over the coming weeks.
Stay in touch or contact us via the following channels:
* [Discord](https://discord.gg/)
* [Telegram](https://t.me/fluence_project)
* [Aqua Github](https://github.com/fluencelabs/aqua)
* [Youtube](https://www.youtube.com/channel/UC3b5eFyKRFlEMwSJ1BTjpbw)

View File

@ -1,26 +0,0 @@
# Table of contents
* [Introduction](README.md)
* [Getting Started](getting-started/README.md)
* [Installation](getting-started/installation.md)
* [Quick Start](getting-started/quick-start.md)
* [Language](language/README.md)
* [Basics](language/basics.md)
* [Types](language/types.md)
* [Values](language/variables.md)
* [Topology](language/topology.md)
* [Execution flow](language/flow/README.md)
* [Sequential](language/flow/sequential.md)
* [Conditional](language/flow/conditional.md)
* [Parallel](language/flow/parallel.md)
* [Iterative](language/flow/iterative.md)
* [Abilities & Services](language/abilities-and-services.md)
* [CRDT Streams](language/crdt-streams.md)
* [Imports And Exports](language/statements-1.md)
* [Expressions](language/expressions/README.md)
* [Header](language/expressions/header.md)
* [Functions](language/expressions/functions.md)
* [Services](language/expressions/services.md)
* [Type definitions](language/expressions/type-definitions.md)
* [Overrideable constants](language/expressions/overrideable-constants.md)

View File

@ -1,2 +0,0 @@
# Appendix

View File

@ -1,4 +0,0 @@
# Aqua Patterns
* maybe move to tutorials/examples/recipe section of general documentation?

View File

@ -1,2 +0,0 @@
# Aqua VM

View File

@ -1,2 +0,0 @@
# Foundations: π-calculus

View File

@ -1,6 +0,0 @@
# Getting Started
[Aqua](https://github.com/fluencelabs/aqua), part of Fluence Labs' Aquamarine Web3 stack, is a purpose-built language to program peer-to-peer networks and compose distributed services hosted on peer-to-peer nodes into applications and backends.
In addition to the language specification, Aqua provides a compiler, which produces the Aqua Intermediate Representation \(AIR\), and an execution stack, Aqua VM, which is part of every Fluence node implementation and executes AIR.

View File

@ -1,4 +0,0 @@
# Debugging And Testing
* any plans or drop this section?

View File

@ -1,30 +0,0 @@
# Installation
Both the Aqua compiler and the support library can be installed natively with `npm`.
To install the compiler:
```bash
npm -g install @fluencelabs/aqua-cli
```
and to make the Aqua library available to TypeScript applications:
```bash
npm -g install @fluencelabs/aqua-lib
```
Moreover, a VSCode syntax-highlighting extension is available. In VSCode, click on the Extensions button, search for `aqua` and install the extension.
![aqua extension for VSCode](../.gitbook/assets/screen-shot-2021-06-29-at-1.06.39-pm.png)

View File

@ -1,101 +0,0 @@
# Quick Start
Every Fluence reference node comes with a set of builtin services that are accessible to Aqua programs. Let's use those readily available services to get the timestamps of a few of our peer-to-peer neighborhood nodes with Aqua.
{% tabs %}
{% tab title="Timestamps With Aqua" %}
```text
-- timestamp_getter.aqua
-- bring the builtin services into scope
import "builtin.aqua"
-- create an identity service to join our results
service Op2("op"):
identity(s: u64)
array(a: string, b: u64) -> string
-- function to get ten timestamps from our Kademlia
-- neighborhood peers and return as an array of u64 timestamps
-- the function argument node is our peer id
func ts_getter(node: string) -> []u64:
-- create a streaming variable
res: *[]u64
-- execute on the specified peer
on node:
-- get the base58 representation of the peer id
k <- Op.string_to_b58(node)
-- find all (default 20) neighborhood peers from k
nodes <- Kademlia.neighborhood(k, false)
-- for each peer in our neighborhood and in parallel
for n <- nodes par:
on n:
-- try and get the peer's timestamp
try:
res <- Peer.timestamp_ms()
-- join on the tenth result (index 9) before proceeding
Op2.identity(res!9)
-- return an array of ten timestamps
<- res
```
{% endtab %}
{% endtabs %}
The Aqua script creates a workflow originating from our client peer: it enumerates the neighbor peers of our reference node, calls the builtin timestamp service on each of them in parallel, joins the result stream once ten timestamps have been collected, and returns the array of u64 timestamps to the client peer.
See the [ts-oracle example](https://github.com/fluencelabs/examples/tree/main/ts-oracle) for the corresponding Aqua files in the `aqua-script` directory. Now that we have our script, let's compile it with the `aqua-cli` tool and find our AIR file in the `air-scripts` directory:
{% tabs %}
{% tab title="Compile" %}
```bash
aqua-cli -i aqua-scripts -o air-scripts -a
```
{% endtab %}
{% tab title="Result" %}
```bash
# in the air-scripts dir you should have the following file
timestamp_getter.ts_getter.air
```
{% endtab %}
{% endtabs %}
Once we have our AIR file, we can use either a TypeScript or a command line client. Let's use the command line client `fldist` \(see the third tab for installation instructions, if needed\):
{% tabs %}
{% tab title="Run Air scripts" %}
```text
# execute the AIR script from our compile phase with a peer id
fldist run_air -p air-scripts/timestamp_getter.ts_getter.air -d '{"node":"12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf"}' --generated
```
{% endtab %}
{% tab title="Result" %}
```
# here we go: ten timestamps in milliseconds obtained in parallel
[
[
1624928596292,
1624928596291,
1624928596291,
1624928596299,
1624928596295,
1624928596286,
1624928596295,
1624928596284,
1624928596293,
1624928596289
]
]
```
{% endtab %}
{% tab title="Installing fldist" %}
```
# if you don't have fldist on your machine:
npm -g install @fluencelabs/fldist
```
{% endtab %}
{% endtabs %}
And that's it. We now have ten timestamps right from our selected peer's neighbors.

View File

@ -1,7 +0,0 @@
# Tools
* Compiler
* Virtual Machine
may not need a section here but could add an Appendix section for each, if needed

View File

@ -1 +0,0 @@
# Language

View File

@ -1,69 +0,0 @@
# Abilities & Services
While [Execution flow](flow/) organizes the flow from peer to peer, Abilities & Services describe what exactly can be called on these peers, and how to call it.
An ability is a concept of "what is possible in this context", like a peer-specific trait or a typeclass. It will be better explained once ability passing is implemented.
{% embed url="https://github.com/fluencelabs/aqua/issues/33" caption="" %}
## Services
A service interfaces functions \(often provided via a WebAssembly interface\) that are executable on a peer. Example of a service definition:
```haskell
service MyService:
foo(arg: string) -> string
bar() -> bool
baz(arg: i32)
```
Service functions in Aqua have no function body. Computations, of any complexity, are implemented in any programming language that fits and then brought to the Aqua execution context. Aqua calls these functions but does not peek into what's going on inside.
#### Built-in Services
Some services may be singletons available on all peers. Such services are called built-ins, and are always available in any scope.
```haskell
-- Built-in service has a constant ID, so it's always resolved
service Op("op"):
noop()
func foo():
-- Call the noop function of "op" service locally
Op.noop()
```
#### Service Resolution
A peer may host many services of the same type. To distinguish services from each other, Aqua requires service resolution: the developer must provide the ID of the service instance to be used on that peer.
```haskell
service MyService:
noop()
func foo():
-- Will fail
MyService.noop()
-- Resolve MyService: it has id "noop"
MyService "noop"
-- Can use it now
MyService.noop()
on "other peer":
-- Should fail: we haven't resolved MyService ID on other peer
MyService.noop()
-- Resolve MyService on peer "other peer"
MyService "other noop"
MyService.noop()
-- Moved back to initial peer, here MyService is resolved to "noop"
MyService.noop()
```
There's no way to call an external function in Aqua without defining all the data types and the service type. One of the most convenient ways to do it is to generate Aqua types from Wasm code in Marine.
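For illustration, here is a minimal sketch of what such definitions might look like; the `User` record, the `UserStorage` service, and their members are hypothetical names rather than a real API:
```haskell
-- The record type returned by the external function must be declared in Aqua...
data User:
    peer_id: string
    name: string

-- ...together with the service interface that exposes the functions
service UserStorage:
    get_user(name: string) -> User
    store_user(user: User) -> bool
```
Before `UserStorage.get_user` can be called, the service ID still has to be resolved in the scope of the target peer, as shown above.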

View File

@ -1,54 +0,0 @@
# Basics
Aqua is an opinionated domain-specific language. It's structured with significant indentation.
```haskell
-- Comments begin with double-dash and end with the line (inline)
func foo(): -- Comments are allowed almost everywhere
-- Body of the block expression is indented
bar(5)
```
Values in Aqua have types, which are designated by a colon, `:`, as seen in the function signature below. The type of a return, which is yielded when a function is executed, is denoted by an arrow pointing to the right `->` , whereas yielding is denoted by an arrow pointing to the left `<-`.
```haskell
-- Define a function that yields a string
func bar(arg: i16) -> string:
-- Call a function
smth(arg)
-- Yield a value from a function
x <- smth(arg)
-- Return a yielded result from a function
<- "return literal"
```
Subsequent sections explain the main parts of Aqua.
Data:
* [Types](types.md)
* [Values of those types](variables.md)
Execution:
* [Topology](topology.md): how to express where the code should be executed
* [Execution flow](flow/): control structures
Computations:
* [Abilities & Services](abilities-and-services.md)
Advanced parallelism:
* [CRDT Streams](crdt-streams.md)
Code management:
* [Imports & exports](statements-1.md)
Reference:
* [Expressions](expressions/)

View File

@ -1,83 +0,0 @@
# CRDT Streams
In Aqua, an ordinary value is a name that points to a single result:
```haskell
value <- foo()
```
A stream, on the other hand, is a name that points to zero or more results:
```haskell
value: *string
value <- foo()
value <- foo()
```
A stream is a kind of [collection](types.md#collection-types) and can be used in place of other collections:
```haskell
func foo(peer: string, relay: ?string):
on peer via relay:
Op.noop()
-- Dirty hack for lack of type variance, and lack of cofunctors
service OpStr("op"):
identity: string -> string
func bar(peer: string, relay: string):
relayMaybe: *string
if peer != %init_peer_id%:
-- To write into a stream, function call is required
relayMaybe <- OpStr.identity(relay)
-- Pass a stream as an optional value
foo(peer, relayMaybe)
```
But the most powerful use of streams is with parallel execution, which introduces non-determinism.
### Streams: Lifecycle And Guarantees
A stream's lifecycle can be separated into three stages:
* Source: \(Parallel\) Writes to a stream
* Map: Handles the stream values
* Sink: Converts the resulting stream into a scalar
Consider the following example:
```haskell
func foo(peers: []string) -> string:
resp: *string
-- Will go to all peers in parallel
for p <- peers par:
on p:
-- Do something
resp <- Srv.call()
resp2: *string
-- What is resp at this point?
for r <- resp par:
on r:
resp2 <- Srv.call()
-- Wait for 6 responses
Op.identity(resp2!5)
-- Once we have 6 responses, merge them
r <- Srv.concat(resp2)
<- r
```
In this case, for each peer in peers, something is going to be written into `resp` stream.
Every peer `p` in peers does not know anything about how the other iterations proceed.
Once something is written to the `resp` stream, the second `for` is triggered. This is the mapping stage.
Then the results are sent to the first peer to call `Op.identity` there. This `Op.identity` waits until the element with index 5 is defined on the `resp2` stream.
When the join is complete, the stream is consumed by the concatenation service to produce a scalar value, which is returned.
During execution, the involved peers have different views on the state of execution: each of the `for` parallel branches has no view of or access to the other branches' data, and eventually the execution flows to the initial peer. The initial peer then merges the writes to the `resp` stream and to the `resp2` stream, respectively. These writes are done in a conflict-free fashion. Furthermore, the respective heads of the `resp` and `resp2` streams will not change from each peer's point of view, as they are immutable and new values can only be appended. However, different peers may see the stream values in a different order, depending on the order in which they receive them.

View File

@ -1,16 +0,0 @@
# Expressions
An Aqua file consists of a header and a body.
### Body expressions
`func`
Function definition is the most crucial part of the language; see [Functions](functions.md) for details.
{% embed url="https://github.com/fluencelabs/aqua/tree/main/semantics/src/main/scala/aqua/semantics/expr" caption="Expressions source code" %}

View File

@ -1,30 +0,0 @@
# Functions
A function in Aqua is a block of code expressing a workflow, i.e., a coordination scenario that works across one or more peers.
A function may take arguments of any type and may return a value.
A function can call other functions, take functions as arguments of [arrow type](../types.md#arrow-types) and be provided as an arrow argument.
Essentially, a function is an arrow. The function call is an expression that connects named arguments to an arrow, and gives a name to the result.
Finally, all a function does is call its arguments or service functions.
```haskell
service MySrv:
foo()
func do_something(): -- arrow of type: -> ()
MySrv "srv id"
MySrv.foo()
```
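As a hedged sketch of the arrow-argument case mentioned above \(the `Stringify` service mirrors the one used elsewhere in this book; the function names are illustrative\):
```haskell
service Stringify("stringify"):
    i32ToStr(arg: i32) -> string

-- An ordinary function; it can be passed wherever a matching arrow is expected
func stringify(x: i32) -> string:
    res <- Stringify.i32ToStr(x)
    <- res

-- Takes an arrow-typed argument and calls it like any other function
func apply(x: i32, transform: i32 -> string) -> string:
    res <- transform(x)
    <- res

func useApply(n: i32) -> string:
    s <- apply(n, stringify)
    <- s
```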
{% hint style="warning" %}
TODO
* list all expressions
* for each, explain the contract and provide a use case
{% endhint %}

View File

@ -1,16 +0,0 @@
# Header
## Header expressions
### `import`
The `import` expression brings everything defined within the imported file into the scope.
```haskell
import "path/to/file.aqua"
```
The path of the file to be imported is first resolved relative to the source file's path, followed by checking the `-imports` directories.
See [Imports & Exports](../statements-1.md) for details.

View File

@ -1,22 +0,0 @@
---
description: Static configuration pieces that affect compilation
---
# Overrideable constants
### `const`
Constant definition.
Constants can be used across all functions, and can be exported and imported. If a constant is defined using `?=`, it can be overridden via compiler flags or imported values.
```haskell
-- This can be overridden with -const "target_peer_id = \"other peer id\""
const target_peer_id ?= "this is a target peer id"
-- This constant cannot be overridden
const service_id = "service id"
```
You can assign only literals to constants. The constant's type is the same as the literal's type, and you can override it only with a subtype of that literal type.
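For example, a minimal sketch of a constant used inside a function body; the `Greeter` service and its `"greeter"` ID are hypothetical:
```haskell
service Greeter("greeter"):
    greet(name: string) -> string

-- Defined with ?=, so it can be overridden at compile time
const default_name ?= "world"

func greetDefault() -> string:
    res <- Greeter.greet(default_name)
    <- res
```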

View File

@ -1,43 +0,0 @@
# Services
### `service`
Service definition.
A service is a program running on a peer. Every service has an interface that consists of a list of functions. To call a service, it must be identified: every service has an ID that must be resolved in the peer's scope.
In the service definition, you enumerate all the functions with their names, argument types, and return types, and optionally provide the default service ID.
Services that are part of the protocol, i.e. available from the peer node, come with predefined IDs. Example of a predefined service:
```haskell
service Peer("peer"):
foo() -- no arguments, no return
bar(i: bool) -> bool
func usePeer() -> bool:
Peer.foo() -- results in a call of service "peer", function "foo", on current peer ID
z <- Peer.bar(true)
<- z
```
Example of a custom service:
```haskell
service MyService:
foo()
bar(i: bool, z: i32) -> string
func useMyService(k: i32) -> string:
-- Need to tell the compiler what "my service" means in this scope
MyService "my service id"
MyService.foo()
on "another peer id":
-- Need to redefine MyService in scope of this peer as well
MyService "another service id"
z <- MyService.bar(false, k)
<- z
```
Service definitions have types. The type of a service is a product of arrows. See [Types](../types.md#type-of-a-service-and-a-file).

View File

@ -1,24 +0,0 @@
# Type definitions
### `data`
[Product type](../types.md#products) definition. See [Types](../types.md) for details.
```haskell
data SomeType:
fieldName: FieldType
otherName: OtherType
third: []u32
```
### `alias`
Aliasing a type to a name.
It may help with self-documented code and refactoring.
```haskell
alias PeerId: string
alias MyDomain: DomainType
```

View File

@ -1,12 +0,0 @@
# Execution flow
Aqua's main goal is to express how the execution flows: moves from peer to peer, forks to parallel flows and then joins back, uses data from one step in another.
As the foundation of Aqua is based on π-calculus, flow is ultimately decomposed into [sequential](sequential.md) \(`seq`, `.`\), [conditional](conditional.md) \(`xor`, `+`\), and [parallel](parallel.md) \(`par`, `|`\) computations, plus [iterations](iterative.md) based on data \(`!P`\). For each basic way to organize the flow, Aqua follows a set of rules for executing the operations:
* What data is available for use?
* What data is exported and can be used below?
* How are errors and failures handled?
These rules form a contract, as in [design-by-contract](https://en.wikipedia.org/wiki/Design_by_contract) programming.

View File

@ -1,123 +0,0 @@
# Conditional
Aqua supports branching: you can return one value or another, recover from the error, or check a boolean expression.
## Contract
* The second arm of the conditional operator is executed if and only if the first arm failed.
* The second arm has no access to the first arm's data.
* A conditional block is considered "executed" if and only if any arm was executed successfully.
* A conditional block is considered "failed" if and only if the second \(recovery\) arm fails to execute.
## Conditional operations
### try
Tries to perform operations; if they fail, the error is swallowed \(unless a `catch` or `otherwise` block follows the `try` block\).
```haskell
try:
-- If foo fails with an error, execution will continue
-- You should write your logic in a non-blocking fashion:
-- If your code below depends on `x`, it may halt as `x` is not resolved.
-- See Conditional return below for workaround
x <- foo()
```
### catch
Catches the standard error from `try` block.
```haskell
try:
foo()
catch e:
logError(e)
```
Type of `e` is:
```haskell
data LastError:
instruction: string -- What AIR instruction failed
msg: string -- Human-readable error message
peer_id: string -- On what peer the error happened
```
### if
`if` corresponds to the `match`/`mismatch` extension of π-calculus.
```haskell
x = true
if x:
-- always executed
foo()
if x == false:
-- never executed
bar()
if x != false:
-- executed
baz()
```
Currently, you may only use a single `==` or `!=` operator in the `if` expression, or compare with `true`.
Both operands can be variables.
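For example, a minimal sketch comparing two variable operands; the `Logger` service and its `"logger"` ID are hypothetical:
```haskell
service Logger("logger"):
    log(msg: string)

func compareNames(a: string, b: string):
    -- Both operands of == are variables of the same type
    if a == b:
        Logger.log("the names are equal")
```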
### else
Just the second branch of `if`, in case the condition does not hold.
```haskell
if true:
foo()
else:
bar()
```
If you want to set a variable based on condition, see Conditional return.
### otherwise
You may add `otherwise` to provide recovery for any block or expression:
```haskell
x <- foo()
otherwise:
-- if foo can't be executed, then do bar()
y <- bar()
```
## Conditional return
In Aqua, functions may have only one return expression, which must be the very last expression, and conditional expressions cannot define the same variable:
```haskell
try:
x <- foo()
otherwise:
x <- bar() -- Error: name x was already defined in scope, can't compile
```
So to get the value based on condition, we need to use a [writeable collection](../types.md#collection-types).
```haskell
-- result may have 0 or more values of type string, and is writeable
resultBox: *string
try:
resultBox <- foo()
otherwise:
resultBox <- bar()
-- now result contains only one value, let's extract it!
result = resultBox!
-- Type of result is string
-- Please note that if there were no writes to resultBox,
-- the first use of result will halt.
-- So you need to be careful about it and ensure that there's always a value.
```

View File

@ -1,83 +0,0 @@
# Iterative
π-calculus has a notion of the repetitive process: `!P = P | !P`. That means, you can always fork a new `P` process if you need it.
In Aqua, two operations correspond to it: you can call a service function \(it's just available when it's needed\), and you can use a `for` loop to iterate over collections.
### `for` expression
In short, `for` looks like the following:
```haskell
xs: []string
for x <- xs:
y <- foo(x)
-- x and y are not accessible there, you can even redefine them
x <- bar()
y <- baz()
```
## Contract
* Iterations of `for` loop are executed sequentially by default.
* Variables defined inside `for` loop are not available outside.
* `for` loop's code has access to all variables above.
* `for` can be executed on a variable of any [Collection type](../types.md#collection-types).
### Conditional `for`
You can make several trials in a loop, and break once any trial succeeded.
```haskell
xs: []string
for x <- xs try:
-- Will stop trying once foo succeeds
foo(x)
```
The contract is changed as in [Conditional](conditional.md#contract) flow.
### Parallel `for`
Running many operations in parallel is the most commonly used pattern for `for`.
```text
xs: []string
for x <- xs par:
on x:
foo()
-- Once the fastest x succeeds, execution continues
-- If you want to make the subsequent execution independent from for,
-- mark it with par, e.g.:
par continueWithBaz()
```
The contract is changed as in [Parallel](parallel.md#contract) flow.
### Export data from `for`
The way to export data from `for` is the same as in [Conditional return](conditional.md#conditional-return) and [Race patterns](parallel.md#join-behavior).
```haskell
xs: []string
return: *string
-- can be par, try, or nothing
for x <- xs par:
on x:
return <- foo()
-- Wait for 6 fastest results -- see Join behavior
baz(return!5, return)
```
### `for` on streams
`for` on streams is one of the most advanced and powerful parts of Aqua. See [CRDT streams](../crdt-streams.md) for details.

View File

@ -1,147 +0,0 @@
# Parallel
Parallel execution is where Aqua fully shines.
## Contract
* Parallel arms have no access to each other's data. Sync points must be explicit \(see [Join behavior](parallel.md#join-behavior)\).
* If any arm is executed successfully, the flow execution continues.
* All the data defined in parallel arms is available in the subsequent code.
## Implementation limitation
Parallel execution has some implementation limitations:
* Parallel means independent execution on different peers
* No parallelism when executing a script on a single peer
* No concurrency in services: every service instance does only one job at a time.
* Keep services small in terms of computation and memory \(WebAssembly limitation\)
These limitations might be overcome in future Aqua updates, but for now, plan your application design with this in mind.
## Parallel operations
#### par
`par` syntax is derived from π-calculus notation of parallelism: `A | B`
```haskell
-- foo and bar will be executed in parallel, if possible
foo()
par bar()
-- It's useful to combine `par` with `on` block,
-- to delegate further execution to different peers.
-- In this case execution will continue on two peers, independently
on "peer 1":
x <- foo()
par on "peer 2":
y <- bar()
-- Once any of the previous functions return x or y,
-- execution continues. We don't know the order, so
-- if y is returned first, hello(x) will not execute
hello(x)
hello(y)
-- You can fix it with par
-- Whatever comes faster will advance the execution flow
hello(x)
par hello(y)
```
`par` works in an infix manner between the previously stated function and the next one.
### co
`co`, short for `coroutine`, prefixes an operation to send it to the background. From the π-calculus perspective, it's the same as `A | null`, where the `null` process is the one that does nothing and completes instantly.
```haskell
-- Let's send foo to background and continue
co foo()
-- Do something on another peer, not blocking the flow on this one
co on "some peer":
baz()
-- This foo does not wait for baz()
foo()
-- Assume that foo is executed on another machine
co try:
x <- foo()
-- bar will not wait for foo to be executed or even launched
bar()
-- bax will wait for foo to complete
-- if foo failed, x will never resolve
-- and bax will never execute
bax(x)
```
## Join behavior
Join means that data was created by different parallel execution flows and then used on a single peer to perform computations. It works the same way for any parallel blocks, be it `par`, `co` or something else \(`for par`\).
In Aqua, you can refer to previously defined variables. In the case of sequential computations, they are available if execution has not failed:
```haskell
-- Start execution somewhere
on peer1:
-- Go to peer1, execute foo, remember x
x <- foo()
-- x is available at this point
on peer2:
-- Go to peer2, execute bar, remember y
y <- bar()
-- Both x and y are available at this point
-- Use them in a function
baz(x, y)
```
Let's make this script parallel: execute `foo` and `bar` on different peers in parallel, then use both to compute `baz`.
```haskell
-- Start execution somewhere
on peer1:
-- Go to peer1, execute foo, remember x
x <- foo()
-- Notice par on the next line: it means, go to peer2 in parallel with peer1
par on peer2:
-- Go to peer2, execute bar, remember y
y <- bar()
-- Remember the contract: either x or y is available at this point
-- As it's enough to execute just one branch to advance further
baz(x, y)
```
What will happen when execution comes to `baz`?
Actually, the script will be executed twice: the first time it will be sent from `peer1`, and the second time from `peer2`. Or the other way round: `peer2` then `peer1`; we don't know which is faster.
When execution gets to `baz` for the first time, Aqua VM will realize that it lacks some data that is expected to be computed in the parallel branch above, and it will halt.
After the second branch executes, the VM will be woken up again, reach the same piece of code, and realize that it now has enough data to proceed.
This way you can express race \(see [Collection types](../types.md#collection-types) and [Conditional return](conditional.md#conditional-return) for other uses of this pattern\):
```haskell
-- Initiate a stream to write into it several times
results: *string
on peer1:
results <- foo()
par on peer2:
results <- bar()
-- When any result is returned, take the first (the fastest) to proceed
baz(results!0)
```

View File

@ -1,61 +0,0 @@
# Sequential
By default, Aqua code is executed line by line, sequentially.
## Contract
* Data from the first arm is available in the second branch.
* The second arm is executed if and only if the first arm succeeded.
* If any arm failed, then the whole sequence is failed.
* If all arms executed successfully, then the whole sequence is executed successfully.
## Sequential operations
### call arrow
Any runnable piece of code in Aqua is an arrow from its domain to the codomain.
```haskell
-- Call a function
foo()
-- Call a function that returns smth, assign results to a variable
x <- foo()
-- Call an ability function
y <- Peer.identify()
-- Pass an argument
z <- Op.identity(y)
```
When you write `<-`, this means not just "assign results of the function on the right to variable on the left". It means that all the effects are executed: [service](../abilities-and-services.md) may change state, the [topology](../topology.md) may be shifted. But you end up being \(semantically\) on the same peer where you have called the arrow.
### on
`on` denotes the peer where the code must be executed. `on` is handled sequentially, and the code inside is executed line by line by default.
```haskell
func foo():
-- Will be executed where `foo` was executed
bar()
-- Move to another peer
on another_peer:
-- To call bar, we need to leave the peer where we were and get to another_peer
-- It's done automagically
bar()
on third_peer via relay:
-- This is executed on third_peer
-- But we denote that to get to third_peer and to leave third_peer
-- an additional hop is needed: get to relay, then to peer
bar()
-- Will be executed in the `foo` call site again
-- To get from the previous `bar`, compiler will add a hop to relay
bar()
```
See more in the [Topology](../topology.md) section.

View File

@ -1,2 +0,0 @@
# Library \(BuiltIns\)

View File

@ -1,6 +0,0 @@
# Execution flow
Aqua's main goal is to express how the execution flows: moves from peer to peer, forks to parallel flows and then joins back, uses data from one step in another.
As the foundation of Aqua is based on π-calculus, flow is ultimately decomposed into sequential \(`seq`, `.`\), conditional \(`xor`, `+`\), and parallel \(`par`, `|`\) computations, plus iterations based on data \(`!P`\).

View File

@ -1,126 +0,0 @@
# Conditional
Aqua supports branching: you can return one value or another, recover from the error, or check a boolean expression.
### Contract
Second branch is executed iff the first branch failed.
Second branch has no access to the first branch's data.
Block is considered executed iff any branch was executed successfully.
Block is considered failed iff the second branch fails to execute.
### Conditional operations
#### try
Tries to perform operations; the error is swallowed if there's no `catch` or `otherwise` statement after the `try` block.
```text
try:
-- If foo fails with an error, execution will continue
-- You should write your logic in a non-blocking fashion:
-- If your code below depends on `x`, it may halt as `x` is not resolved.
-- See Conditional return below for workaround
x <- foo()
```
#### catch
Catches the standard error from `try` block.
```text
try:
foo()
catch e:
logError(e)
```
Type of `e` is:
```text
data LastError:
instruction: string -- What AIR instruction failed
msg: string -- Human-readable error message
peer_id: string -- On what peer the error happened
```
#### if
`if` corresponds to the `match`/`mismatch` extension of π-calculus.
```text
x = true
if x:
-- always executed
foo()
if x == false:
-- never executed
bar()
if x != false:
-- executed
baz()
```
Currently, you may only use a single `==` or `!=` operator in the `if` expression, or compare with `true`.
Both operands can be variables. The variable types must intersect.
#### else
Just the second branch of `if`, in case the condition does not hold.
```text
if true:
foo()
else:
bar()
```
If you want to set a variable based on condition, see Conditional return.
#### otherwise
You may add `otherwise` to provide recovery for any block or expression:
```text
x <- foo()
otherwise:
-- if foo can't be executed, then do bar()
y <- bar()
```
### Conditional return
In Aqua, functions may have only one return expression, which is on the last line of the function block, and conditional expressions cannot define the same variable:
```text
try:
x <- foo()
otherwise:
x <- bar() -- Error: name x was already defined in scope, can't compile
```
So to get the value based on condition, we need to use a [writeable collection](../types.md#collection-types).
```text
-- result may have 0 or more values of type string, and is writeable
resultBox: *string
try:
resultBox <- foo()
otherwise:
resultBox <- bar()
-- now result contains only one value, let's extract it!
result = resultBox!
-- Type of result is string
-- Please note that if there were no writes to resultBox,
-- the first use of result will halt.
-- So you need to be careful about it and ensure that there's always a value.
```

View File

@ -1,89 +0,0 @@
# Iterative
π-calculus has a notion of repetitive process: `!P = P | !P`. That means, you can always fork a new `P` process if you need it.
In Aqua, two operations correspond to it: you can call a service function \(it's just available when it's needed\), and you can use a `for` loop to iterate over collections.
### For expression
In short, `for` looks like the following:
```text
xs: []string
for x <- xs:
y <- foo(x)
-- x and y are not accessible there, you can even redefine them
x <- bar()
y <- baz()
```
### Contract
Iterations of `for` loop are executed sequentially by default.
Variables defined inside for loop are not available outside.
For loop's code has access to all variables above.
For can be executed on a variable of any [Collection type](../types.md#collection-types).
### Conditional for
You can make several trials in a loop, and break once any trial succeeded.
```text
xs: []string
for x <- xs try:
-- Will stop trying once foo succeeds
foo(x)
```
Contract is changed as in [Conditional](conditional.md#contract) flow.
### Parallel for
Running many operations in parallel is the most commonly used pattern for `for`.
```text
xs: []string
for x <- xs par:
on x:
foo()
-- Once the fastest x succeeds, execution continues
-- If you want to make the subsequent execution independent from for,
-- mark it with par, e.g.:
par continueWithBaz()
```
Contract is changed as in [Parallel](parallel.md#contract) flow.
### Export data from for
The way to export data from `for` is the same as in [Conditional return](conditional.md#conditional-return) and [Race patterns](parallel.md#join-behavior).
```text
xs: []string
return: *string
-- can be par, try, or nothing
for x <- xs par:
on x:
return <- foo()
-- Wait for 6 fastest results -- see Join behavior
baz(return!5, return)
```
### For on streams
`for` on streams is one of the most complex parts of Aqua.
A stream forms a CRDT; iterations are added as new values are appended to the stream.

View File

@ -1,138 +0,0 @@
# Parallel
Parallel execution is where everything becomes shiny.
### Contract
Parallel branches have no access to each other's data. Sync points must be explicit \(see Join behavior\).
If any branch is executed successfully, the flow execution continues.
All the data defined in parallel branches is available in the subsequent code.
### Implementation limitation
Parallel execution has some implementation limitations:
* Parallel means independent execution on different peers
* No parallelism when executing a script on a single peer \(although a fix is planned\)
* No concurrency in services: one service instance does only one job at a time. Keep services small \(a Wasm limitation\)
We might overcome these limitations later, but for now, plan your application design with this in mind.
### Parallel operations
### par
`par` syntax is derived from π-calculus notation of parallelism: `A | B`
```text
-- foo and bar will be executed in parallel, if possible
foo()
par bar()
-- It's useful to combine `par` with `on` block,
-- to delegate further execution to different peers.
-- In this case execution will continue on two peers, independently
on "peer 1":
x <- foo()
par on "peer 2":
y <- bar()
-- Once any of the previous functions return x or y,
-- execution continues. We don't know the order, so
-- if y is returned first, hello(x) will not execute
hello(x)
hello(y)
-- You can fix it with par
-- Whatever comes faster will advance the execution flow
hello(x)
par hello(y)
```
`par` works in an infix manner between the previously stated function and the next one.
#### co
`co`, short for `coroutine`, prefixes an operation to send it to the background. From the π-calculus perspective, it's the same as `A | null`, where the `null` process is the one that does nothing and completes instantly.
```text
-- Let's send foo to background and continue
co foo()
-- Do something on another peer, not blocking the flow on this one
co on "some peer":
baz()
-- This foo does not wait for baz()
foo()
```
### Join behavior
Join means that data was created by different parallel execution flows and then used on a single peer to perform computations. It works the same way for any parallel blocks, be it `par`, `co` or something else \(`for par`\).
In Aqua, you can refer to previously defined variables. In the case of sequential computations, they are available if execution has not failed:
```text
-- Start execution somewhere
on peer1:
-- Go to peer1, execute foo, remember x
x <- foo()
-- x is available at this point
on peer2:
-- Go to peer2, execute bar, remember y
y <- bar()
-- Both x and y are available at this point
-- Use them in a function
baz(x, y)
```
Let's make this script parallel: execute `foo` and `bar` on different peers in parallel, then use both to compute `baz`.
```text
-- Start execution somewhere
on peer1:
-- Go to peer1, execute foo, remember x
x <- foo()
-- Notice par on the next line: it means, go to peer2 in parallel with peer1
par on peer2:
-- Go to peer2, execute bar, remember y
y <- bar()
-- Remember the contract: either x or y is available at this point
-- As it's enough to execute just one branch to advance further
baz(x, y)
```
What will happen when execution comes to `baz`?
Actually, the script will be executed twice: the first time it will be sent from `peer1`, and the second time from `peer2`. Or the other way round: `peer2` then `peer1`; we don't know which is faster.
When execution gets to `baz` for the first time, [Aqua VM](../../runtimes/aqua-vm.md) will realize that it lacks some data that is expected to be computed in the parallel branch above, and it will halt.
After the second branch executes, the VM will be woken up again, reach the same piece of code, and realize that it now has enough data to proceed.
This way you can express race \(see [Collection types](../types.md#collection-types) and [Conditional return](conditional.md#conditional-return) for other uses of this pattern\):
```text
-- Initiate a stream to write into it several times
results: *string
on peer1:
results <- foo()
par on peer2:
results <- bar()
-- When any result is returned, take the first (the fastest) to proceed
baz(results!0)
```

View File

@ -1,64 +0,0 @@
# Sequential
By default, Aqua code is executed sequentially line by line.
### Contract
Data from the first branch is available in the second branch.
Second branch is executed iff the first branch executed successfully.
If any branch errored, then the whole sequence is errored.
If all branches executed successfully, then the whole seq is executed successfully.
### Sequential operations
#### call arrow
Any runnable piece of code in Aqua is an arrow from its domain to codomain.
```text
-- Call a function
foo()
-- Call a function that returns smth, assign results to a variable
x <- foo()
-- Call an ability function
y <- Peer.identify()
-- Pass an argument
z <- Op.identity(y)
```
When you write `<-`, this means not just "assign results of the function on the right to variable on the left". It means that all the effects are executed: [service](../abilities-and-services.md) may change state, [topology](../topology.md) may be shifted. But you end up in the same topological scope.
#### on
`on` denotes the peer where the code must be executed.
```text
func foo():
-- Will be executed where `foo` was executed
bar()
-- Move to another peer
on another_peer:
-- To call bar, we need to leave the peer where we were and get to another_peer
-- It's done automagically
bar()
on third_peer via relay:
-- This is executed on third_peer
-- But we denote that to get to third_peer and to leave third_peer
-- an additional hop is needed: get to relay, then to peer
bar()
-- Will be executed in the `foo` call site again
-- To get from the previous `bar`, compiler will add a hop to relay
bar()
```
See more in the [Topology](../topology.md) section.

View File

@ -1,25 +0,0 @@
# Imports And Exports
An Aqua source file has a header and a body. The body contains function definitions, services, types, and constants. The header manages what is imported from other files and what is exported.
### Import Expression
The main way to import a file is via `import` expression:
```haskell
import "@fluencelabs/aqua-lib/builtin.aqua"
func foo():
Op.noop()
```
The Aqua compiler takes a source directory and a list of import directories \(usually with `node_modules` as a default\). You can use paths to `.aqua` files relative to the current file's path or to the import folders.
Everything defined in the file is imported into the current namespace.
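For instance, both resolution modes might look like this; the relative file name is hypothetical:
```haskell
-- Resolved via the import folders, e.g. node_modules
import "@fluencelabs/aqua-lib/builtin.aqua"

-- Resolved relative to the current file's path
import "../shared/types.aqua"
```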
### `use` Expression
The `use` expression makes it possible to import a subset of a file, or to alias imports to avoid namespace collisions.
{% embed url="https://github.com/fluencelabs/aqua/issues/30" caption="" %}

View File

@ -1,201 +0,0 @@
---
description: Define where the code is to be executed and how to get there
---
# Topology
Aqua lets developers describe the whole distributed workflow in a single script, link data, recover from errors, implement complex patterns like backpressure, and more. Hence, the network topology is at the heart of Aqua.
Topology in Aqua is declarative: You just need to say where a piece of code must be executed, on what peer, and optionally how to get there. The Aqua compiler will add all the required network hops.
## On expression
`on` expression moves execution to the specified peer:
```haskell
on "my peer":
foo()
```
Here, `foo` is instructed to be executed on a peer with id `my peer`. `on` supports variables of type `string`:
```haskell
-- foo, bar, baz are instructed to be executed on myPeer
on myPeer:
foo()
bar()
baz()
```
## `%init_peer_id%`
There is one custom peer ID that is always in scope: `%init_peer_id%`. It points to the peer that initiated this request.
{% hint style="warning" %}
Using `on %init_peer_id%` is an anti-pattern: There is no way to ensure that init peer is accessible from the currently used part of the network.
{% endhint %}
## More complex scenarios
Consider this example:
```go
func foo():
on "peer foo":
do_foo()
func bar(i: i32):
do_bar()
func baz():
bar(1)
on "peer baz":
foo()
bar(2)
bar(3)
```
Take a minute to think about:
* Where is `do_foo` executed?
* Where is `bar(1)` executed?
* On what node does `bar(2)` run?
* What about `bar(3)`?
Declarative topology definition always works the same way.
* `do_foo` is executed on "peer foo", always.
* `bar(1)` is executed on the same node where `baz` was running. If `baz` is the first called function, then it's the init peer id.
* `bar(2)` is executed on `"peer baz"`, despite the fact that `foo` does a topologic transition. `bar(2)` is in the scope of `on "peer baz"`, so it will be executed there
* `bar(3)` is executed where `bar(1)` was: in the root scope of `baz`, wherever it was called from
## Accessing peers `via` other peers
In a distributed network it is quite common that a peer is not directly accessible. For example, a browser has no public network interface and you cannot open a socket to a browser at will. Such constraints warrant a `relay` pattern: there should be a well-connected peer that relays requests from a peer to the network and vice versa.
Relays are handled with `via`:
```haskell
-- When we go to some peer from some other peer,
-- the compiler will add an additional hop to some relay
on "some peer" via "some relay":
foo()
-- More complex path: first go to relay2, then to relay1,
-- then to peer. When going out of peer, do it in reverse
on "peer" via relay1 via relay2:
foo()
-- You can pass any collection of strings to relay,
-- and it will go through it if it's defined,
-- or directly if not
func doViaRelayMaybe(peer: string, relayMaybe: ?string):
on peer via relayMaybe:
foo()
```
`on`s nested or delegated in functions work just as you expect:
```haskell
-- From where we are, -> relay1 -> peer1
on "peer1" via "relay1":
-- On peer1
foo()
-- now go -> relay1 -> relay2 -> peer2
-- going to relay1 to exit peer1
-- going to relay2 to enable access to peer2
on "peer2" via "relay2":
-- On peer2
foo()
-- This is executed in the root scope, after we were on peer2
-- How to get there?
-- Compiler knows the path that just worked
-- So it goes -> relay2 -> relay1 -> (where we were)
foo()
```
With `on` and `on ... via`, significant indentation changes the place where the code will be executed, as well as the paths that are taken when the execution flow "bubbles up" \(see the last call of `foo`\). It's more efficient to keep the flow as flat as possible. Consider the following change of indentation in the previous script, and how it affects execution:
```haskell
-- From where we are, -> relay1 -> peer1
on "peer1" via "relay1":
-- On peer1
foo()
-- now go -> relay1 -> relay2 -> peer2
-- going to relay1 to exit peer1
-- going to relay2 to enable access to peer2
on "peer2" via "relay2":
-- On peer2
foo()
-- This is executed in the root scope, after we were on peer2
-- How to get there?
-- Compiler knows the path that just worked
-- So it goes -> relay2 -> (where we were)
foo()
```
Once the `on` scope has ended, it does not affect any further topology moves. While the indentation continues, `on` affects the topology and may add additional topology moves, which means more round trips and unnecessary latency.
## Callbacks
What if you want to return something to the initial peer? For example, implement a request-response pattern. Or send a bunch of requests to different peers, and render responses as they come, in any order.
This can be done with callback arguments in the entry function:
```go
func run(updateModel: Model -> (), logMessage: string -> ()):
on "some peer":
m <- fetchModel()
updateModel(m)
par on "other peer":
x <- getMessage()
logMessage(x)
```
Callbacks have the [arrow type](types.md#arrow-types).
If you pass just ordinary functions as arrow-type arguments, they will work as if you had hardcoded them.
```haskell
func foo():
on "peer 1":
doFoo()
func bar(cb: -> ()):
on "peer2":
cb()
func baz():
-- foo will go to peer 1
-- bar will go to peer 2
bar(foo)
```
If you pass a service call as a callback, it will be executed locally on the node where you called it. That might change.
Functions that capture the topologic context of their definition site are planned but not yet implemented. **Proposed** syntax:
```haskell
func baz():
foo = do (x: u32):
-- Executed there, where foo is called
Srv.call(x)
<- x
-- When foo is called, it will get back to this context
bar(foo)
```
{% embed url="https://github.com/fluencelabs/aqua/issues/183" caption="Issue for adding \`do\` expression" %}
{% hint style="warning" %}
Passing service function calls as arguments is very fragile as it does not track that the service is resolved in the scope of the call. Abilities variance may fix that.
{% endhint %}
## Parallel execution and topology
When blocks are executed in parallel, it is not always necessary to resolve the topology to get to the next peer. The compiler will add topologic hops from the par branch only if data defined in that branch is used down the flow.
{% hint style="danger" %}
What if none of the branches return? Execution will halt. Be careful: use `co` if you don't care about the returned data.
{% endhint %}
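As a hedged sketch of the `co` advice above: when a background branch's result is never used below, no join is required and nothing halts. The `Logger` and `Worker` services and their IDs are hypothetical:
```haskell
service Logger("logger"):
    log(msg: string)

service Worker("worker"):
    work() -> string

func doWork(reporter: string) -> string:
    -- The logging branch runs in the background on another peer;
    -- its result is not used below, so the flow never waits on it
    co on reporter:
        Logger.log("work started")
    res <- Worker.work()
    <- res
```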

View File

@ -1,161 +0,0 @@
# Types
## Scalars
Scalar types follow the Wasm IT notation.
* Unsigned numbers: `u8`, `u16`, `u32`, `u64`
* Signed numbers: `i8`, `i16`, `i32`, `i64`
* Floats: `f32`, `f64`
* Boolean: `bool`
* String: `string`
* Records \(product type\): see below
* Arrays: see Collection Types below
## Literals
You can pass booleans \(true, false\), numbers, and double-quoted strings as literals.
## Products
```haskell
data ProductName:
field_name: string
data OtherProduct:
product: ProductName
flag: bool
```
Fields are accessible with the dot operator `.`, e.g. `product.field`.
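For example, a minimal sketch reading a nested field of the product types defined above:
```haskell
func getName(o: OtherProduct) -> string:
    -- Navigate through nested fields with the dot operator
    name = o.product.field_name
    <- name
```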
## Collection Types
Aqua has three different types with variable length, denoted by quantifiers `[]`, `*`, and `?`.
Immutable collection with 0..N values: `[]`
Immutable collection with 0 or 1 value: `?`
Appendable collection with 0..N values: `*`
Any data type can be prepended with a quantifier, e.g. `*u32`, `[][]string`, `?ProductType` are all correct type specifications.
You can access a distinct value of a collection with the `!` operator, optionally followed by an index.
Examples:
```haskell
strict_array: []u32
array_of_arrays: [][]u32
element_5 = strict_array!5
element_0 = strict_array!0
element_0_anotherway = strict_array!
-- It could be an argument or any other collection
maybe_value: ?string
-- This ! operator will FAIL if maybe_value is backed by a read-only data structure
-- And will WAIT if maybe_value is backed with a stream (*string)
value = maybe_value!
```
## Arrow Types
Every function has an arrow type that maps a list of input types to an optional output type.
It can be denoted as: `Type1, Type2 -> Result`
In the type definition, the absence of a result is denoted with `()`, e.g., `string -> ()`
The absence of arguments is denoted by `-> ()`. That is, this mapping takes no arguments and has no return type.
Note that there's no `Unit` type in Aqua: you cannot assign a non-existing result to a value.
```haskell
-- Assume that arrow has type: -> ()
-- This is possible:
arrow()
-- This will lead to error:
x <- arrow()
```
## Type Alias
For convenience, you can alias a type:
```haskell
alias MyAlias: ?string
```
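A small usage sketch, assuming the builtin-style `Op` service shown elsewhere in this book; the alias and function names are illustrative:
```haskell
service Op("op"):
    noop()

alias PeerId: string

-- The alias documents intent in the signature without changing the underlying type
func ping(peer: PeerId):
    on peer:
        Op.noop()
```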
## Type Variance
Aqua is made for composing data on the open network. That means that you want to compose things if they do compose, even if you don't control their source code.
Therefore Aqua follows the structural typing paradigm: if a type contains all the expected data, then it fits. For example, you can pass `u8` in place of `u16` or `i16`. Or `?bool` in place of `[]bool`. Or `*string` instead of `?string` or `[]string`. The same holds for products.
For arrow types, Aqua checks the variance on arguments and contravariance on the return type.
```haskell
-- We expect u32
xs: *u32
-- u16 is less than u32
foo1: -> u16
-- works
xs <- foo1()
-- i32 has sign, so cannot fit into u32
foo2: -> i32
-- will fail
xs <- foo2()
-- Function takes an arrow as an argument
func bar(callback: u32 -> u32): ...
foo3: u16 -> u16
-- Will not work
bar(foo3)
foo4: u16 -> u64
-- Works
bar(foo4)
```
Arrow type `A: D -> C` is a subtype of `A1: D1 -> C1`, if `D1` is a subtype of `D` and `C` is a subtype of `C1`.
## Type Of A Service And A File
A service type is a product of arrows.
```haskell
service MyService:
foo(arg: string) -> bool
-- type of this service is:
data MyServiceType:
foo: string -> bool
```
The file is a product of all defined constants and functions \(treated as arrows\). Type definitions in the file do not go to the file type.
```haskell
-- MyFile.aqua
func foo(arg: string) -> bool:
...
const flag ?= true
-- type of MyFile.aqua
data MyServiceType:
foo: string -> bool
flag: bool
```
{% embed url="https://github.com/fluencelabs/aqua/blob/main/types/src/main/scala/aqua/types/Type.scala" caption="See the types system implementation" %}

View File

@ -1,243 +0,0 @@
# Values
Aqua is all about combining data and computations. The runtime for the compiled Aqua code, [AquaVM](https://github.com/fluencelabs/aquavm), tracks what data comes from what origin, which constitutes the foundation for distributed systems security. That approach, driven by π-calculus and security considerations of open-by-default networks and distributed applications as custom application protocols, also puts constraints on the language that configures it.
Values in Aqua are backed by VDS \(Verifiable Data Structures\) in the runtime. All operations on values must keep the authenticity of data, proved by signatures under the hood.
That's why values are immutable. Changing the value effectively makes a new one:
```haskell
x = "hello"
y = "world"
-- despite the sources of x and y, z's origin is "peer 1"
-- and we can trust value of z as much as we trust "peer 1"
on "peer 1":
z <- concat(x, y)
```
More on that in the Security section. Now let's see how we can work with values inside the language.
## Arguments
Function arguments are available within the whole function body.
```haskell
func foo(arg: i32, log: string -> ()):
-- Use data arguments
bar(arg)
-- Arguments can have arrow type and be used as strings
log("Wrote arg to responses")
```
## Return values
You can assign the results of an arrow call to a name and use this returned value in the code below.
```haskell
-- Imagine a Stringify service that's always available
service Stringify("stringify"):
i32ToStr(arg: i32) -> string
-- Define the result type of a function
func bar(arg: i32) -> string:
-- Make a value, name it x
x <- Stringify.i32ToStr(arg)
-- Starting from there, you can use x
-- Pass x out of the function scope as the return value
<- x
func foo(arg: i32, log: *string):
-- Use bar to convert arg to string, push that string
-- to logs stream, return nothing
log <- bar(arg)
```
## Literals
Aqua supports just a few literals: numbers, quoted strings, booleans. You [cannot init a structure](https://github.com/fluencelabs/aqua/issues/167) in Aqua, only obtain it as a result of a function call.
```haskell
-- String literals cannot contain double quotes
-- No single-quoted strings allowed, no escape chars.
foo("double quoted string literal")
-- Booleans are true or false
if x == false:
foo("false is a literal")
-- Numbers are different
-- Any number:
bar(1)
-- Signed number:
bar(-1)
-- Float:
bar(-0.2)
```
## Getters
In Aqua, you can use a getter to peek into a field of a product or an indexed element in an array.
```haskell
data Sub:
sub: string
data Example:
field: u32
arr: []Sub
child: Sub
func foo(e: Example):
bar(e.field) -- u32
bar(e.child) -- Sub
bar(e.child.sub) -- string
bar(e.arr) -- []Sub
bar(e.arr!) -- gets the 0th element
bar(e.arr!.sub) -- string
bar(e.arr!2) -- gets the 2nd element
bar(e.arr!2.sub) -- string
```
Note that the `!` operator may fail or halt:
* If it is called on an immutable collection, it will fail if the collection has no element at the given index; you can handle the error with [try](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md#try) or [otherwise](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md#otherwise).
* If it is called on an appendable stream, it will wait for some parallel append operation to fulfill, see [Join behavior](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/parallel.md#join-behavior).
{% hint style="warning" %}
The `!` operator can currently only be used with literal indices.
That is,`!2` is valid but`!x` is not valid.
We expect to address this limitation soon.
{% endhint %}
## Assignments
Assignments, `=`, only give a name to a value with an applied getter or to a literal.
```haskell
func foo(arg: bool, e: Example):
-- Rename the argument
a = arg
-- Assign the name b to value of e.child
b = e.child
-- Create a named literal
c = "just string value"
```
## Constants
Constants are like assignments but in the root scope. They can be used in all function bodies, textually below the place of const definition. Constant values must resolve to a literal.
You can change the compilation results by overriding a constant, but the override needs to be of the same type or a subtype.
```haskell
-- This flag is always true
const flag = true
-- This setting can be overwritten via CLI flag
const setting ?= "value"
func foo(arg: string): ...
func bar():
-- Type of setting is string
foo(setting)
```
## Visibility scopes
Visibility scopes follow the contracts of execution flow.
By default, everything defined textually above is available below, with some exceptions.
Functions have isolated scopes:
```haskell
func foo():
a = 5
func bar():
-- a is not defined in this function scope
a = 7
foo() -- a inside foo is 5
```
[For loop](flow/iterative.md#export-data-from-for) does not export anything from it:
```haskell
func foo():
x = 5
for y <- ys:
-- Can use what was defined above
z <- bar(x)
-- z is not defined in scope
z = 7
```
[Parallel](flow/parallel.md#join-behavior) branches have [no access](https://github.com/fluencelabs/aqua/issues/90) to each other's data:
```haskell
-- This will deadlock, as foo branch of execution will
-- never send x to a parallel bar branch
x <- foo()
par y <- bar(x)
-- After par is executed, all the data can be used
baz(x, y)
```
Recovery branches in [conditional flow](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md) have no access to the main branch as the main branch exports values, whereas the recovery branch does not:
```haskell
try:
x <- foo()
otherwise:
-- this is not possible and will fail
bar(x)
y <- baz()
-- y is not available below
willFail(y)
```
## Streams as literals
Stream is a special data structure that allows many writes. It has [a dedicated article](crdt-streams.md).
To use a stream, you first need to initiate it:
```haskell
-- Initiate an (empty) appendable collection of strings
resp: *string
-- Write strings to resp in parallel
resp <- foo()
par resp <- bar()
for x <- xs:
-- Write to a stream that's defined above
resp <- baz()
try:
resp <- baz()
otherwise:
on "other peer":
resp <- baz()
-- Now resp can be used in place of arrays and optional values
-- assume fn: []string -> ()
fn(resp)
-- Can call fn with empty stream: you can use it
-- to construct empty values of any collection types
nilString: *string
fn(nilString)
```
One of the most frequently used patterns for streams is [Conditional return](flow/conditional.md#conditional-return).

View File

@ -1,40 +0,0 @@
# Tour De Aqua
* Why Aqua -- not in order
* particle model
* client server model
* p2p + Aqua model
* request-response pattern
* chain-forward pattern
* Note on Marine, Wasm IT
Given an abundance of active and abandoned programming languages, why create another one? The need for Aqua arises from the desire to maximize the potential afforded by peer-to-peer networks as a distributed hosting environment for services composable into applications and backends.
Figure x: need one new graphic to illustrate both aspects
That is, Aqua provides the capabilities necessary to implement and execute a "full-stack" peer-to-peer programming model, where Aqua is used to program the network as well as to compose applications from distributed services, providing the following benefits:
* Composition without centralization
* Communication, access and execution security as first class zero trust citizens
* Programmable network requests
* Extensible beyond peer-native services to Web2 resources
At the heart of the peer-to-peer programming model -- is this Fluence or Aquamarine?
* _particle_
## A Taste Of Aqua
or a different example?
```text
service Greeting("service-id"):
greeting: string, bool -> string
func greeter(name: string, greet: bool, node: string, service_id: string) -> string:
on node:
Greeting service_id
res <- Greeting.greeting(name, greet)
<- res
```