mirror of
https://github.com/fluencelabs/aqua-book
synced 2024-12-04 23:30:18 +00:00
GitBook: [#129] Registry: Resources API
This commit is contained in:
parent
12c7a988fd
commit
db8ab13a9b
BIN  .gitbook/assets/Screen Shot 2021-06-29 at 1.06.39 PM.png  Normal file (982 KiB, binary file not shown)
BIN  .gitbook/assets/image.png  Normal file (100 KiB, binary file not shown)
25  README.md  Normal file
@@ -0,0 +1,25 @@

# Introduction

Fluence is an open protocol and a framework for internet or private cloud applications. Fluence provides a peer-to-peer development stack so that you can create applications free of proprietary cloud platforms, centralized APIs, and untrustworthy third parties. The Fluence stack is open source and is maintained and governed by a community of developers.

At the core of Fluence is the open-source language **Aqua**, which allows for the programming of peer-to-peer scenarios separately from the computations on peers. Applications are turned into hostless workflows over distributed function calls, which enables various levels of decentralization: from handling by a limited set of servers to a complete peer-to-peer architecture connecting user devices directly.

{% embed url="https://youtu.be/dIUXgdEcUPg" %}
Approaching Web3 development with Aqua language
{% endembed %}

{% embed url="https://youtu.be/M_u-EnWrMOQ" %}

This book is dedicated to all things Aqua. It is currently in its **alpha** version, and we expect to expand both the breadth and depth of its coverage over the coming weeks.

Stay in touch or contact us via the following channels:

* [Discord](https://discord.gg)
* [Telegram](https://t.me/fluence\_project)
* [Aqua Github](https://github.com/fluencelabs/aqua)
* [Youtube](https://www.youtube.com/channel/UC3b5eFyKRFlEMwSJ1BTjpbw)
35  SUMMARY.md  Normal file
@@ -0,0 +1,35 @@

# Table of contents

* [Introduction](README.md)
* [Getting Started](getting-started/README.md)
  * [Installation](getting-started/installation.md)
  * [Quick Start](getting-started/quick-start.md)
* [Language](language/README.md)
  * [Types](language/types.md)
  * [Values](language/variables.md)
  * [Topology](language/topology.md)
  * [Execution flow](language/flow/README.md)
    * [Sequential](language/flow/sequential.md)
    * [Conditional](language/flow/conditional.md)
    * [Parallel](language/flow/parallel.md)
    * [Iterative](language/flow/iterative.md)
  * [Abilities & Services](language/abilities-and-services.md)
  * [CRDT Streams](language/crdt-streams.md)
  * [Closures](language/closures.md)
  * [Imports And Exports](language/header/README.md)
    * [Control, Scope And Visibility](language/header/control-scope-and-visibility.md)
  * [Expressions](language/expressions/README.md)
    * [Header](language/expressions/header.md)
    * [Functions](language/expressions/functions.md)
    * [Services](language/expressions/services.md)
    * [Type definitions](language/expressions/type-definitions.md)
    * [Overrideable constants](language/expressions/overrideable-constants.md)
* [Libraries](libraries/README.md)
  * [@fluencelabs/aqua-lib](libraries/aqua-lib.md)
  * [@fluencelabs/aqua-ipfs](libraries/aqua-ipfs.md)
  * [@fluencelabs/registry](libraries/registry.md)
* [Aqua CLI](aqua-cli/README.md)
  * [Service Management](aqua-cli/service-management.md)
  * [Scheduling Scripts](aqua-cli/scheduling-scripts.md)
  * [Peer state info](aqua-cli/peer-state-info.md)
* [Changelog](changelog.md)
155  aqua-cli/README.md  Normal file
@@ -0,0 +1,155 @@

# Aqua CLI

Aqua CLI allows you to manage all aspects of [Aqua](https://doc.fluence.dev/aqua-book/) development and includes:

* Compiler
* Client Peer
* Utilities

To install the Aqua CLI package:

```bash
npm install -g @fluencelabs/aqua
```

### Compiler

The compiler turns high-level Aqua code into either JS/TS handlers wrapping AIR, the default, or pure AIR.

The quickest way to compile Aqua code is to take all `.aqua` files from the specified _input_ directory, e.g., `src/aqua`, and place the generated JavaScript code in an _output_ directory of your choosing, e.g., `src/generated`. Note that if the specified output directory does not exist, the CLI creates it for you:

```bash
aqua --input src/aqua --output src/generated
```

Of course, we can be more specific and name a file:

```bash
aqua --input src/aqua/some_file.aqua --output src/generated
```

As mentioned in the intro, the Aqua compiler generates `.js` with `.d.ts` TypeScript files by default. Output files contain the functions exported from `.aqua` files and methods for registering defined services. You can read more about calling functions and service registration in the [FluenceJS documentation](https://doc.fluence.dev/docs/fluence-js/3\_in\_depth).

Additional compiler options are:

* `--js` flag, which generates only `.js` files
* `--air` or `-a` flag, which generates pure AIR code
* `--scheduled`, which generates AIR code suitable for script storage

Use `aqua --help` for a complete listing of available flags, subcommands and explanations.
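Before compiling, it can help to confirm which `.aqua` files the compiler would pick up from the input directory. A minimal sketch using only standard shell tools (the directory layout and file names are hypothetical):

```shell
# Set up a hypothetical input directory with two Aqua sources.
mkdir -p src/aqua/deep
printf -- '-- stub\n' > src/aqua/main.aqua
printf -- '-- stub\n' > src/aqua/deep/util.aqua

# These are the files `aqua --input src/aqua` would consider.
find src/aqua -name '*.aqua' | sort
```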
### Subcommands

The CLI provides additional features and utilities via subcommands.

#### Aqua Run

The `aqua run` command creates a one-shot client peer over the specified compiled Aqua code, allowing you to quickly and effectively test Aqua code against deployed services on the network.

```bash
aqua run --addr <relay multiaddress> --input <your aqua filepath> --func '<function name>(<args>)'
```

For the following Aqua script:

```python
-- some-dir/hello.aqua
service Hello("service_id"):
    hello(name: string) -> string

func hello(name: string, node: string, sid: string) -> string:
    on node:
        Hello sid
        res <- Hello.hello(name)
    <- res
```

We instantiate our Aqua client peer:

```bash
aqua run --addr /dns4/.../wss/p2p/12D3 ...aoHI --input some-dir/hello.aqua --func 'hello("reader", "peer id", "service id")'
```

The `aqua run` command provides additional features such as:

* `--sk` or `-s` allows you to provide your secret key (sk) in base64
* `--addr` or `-a` allows you to specify a relay in [_multiaddr_](https://github.com/multiformats/multiaddr) format, e.g., `/dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi`. The `aqua config default_peers <krasnodar, testnet, stage>` command enumerates the respective network multiaddresses of available Fluence nodes.
* `--import` or `-m` allows you to [import functionality](https://doc.fluence.dev/aqua-book/language/header) from one or more source folders by using the flag repeatedly
* `--data` or `-d` allows you to specify data arguments as a JSON map:

```
aqua run --addr /dns4/.../wss/p2p/12D3 ... oHI --input my_code.aqua --func 'my_aqua_func(a, b)' --data '{"a": "some_string", "b": 123}'
```

* `--data-path` or `-p` allows you to specify data arguments, see `--data`, as a file. _Note that `--data` and `--data-path` are mutually exclusive._

Use `aqua run --help` for a complete listing of available flags and explanations.
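A multiaddr encodes both the transport and the peer identity; the segment after the final `/p2p/` is the peer id. A small sketch of pulling it out with plain shell parameter expansion (the address is the example multiaddr from the flag listing above):

```shell
# Example relay multiaddr.
addr=/dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi

# Strip everything up to and including the last '/', leaving the peer id.
peer_id=${addr##*/}
echo "$peer_id"
```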
#### Aqua Create Keypair

The `aqua key create` utility allows you to create an ed25519-based keypair in _base64_:

```bash
aqua key create
```

And produces the following JSON document:

```json
{
    "peerId": "12D3KooWMC6picJQNruwFMFWqP62FWLtbM94TGYEzCsthsKa46CQ",
    "secretKey": "QG3Ot2i1kD4Mpw0RpsKtUjbA/0XjZ0WP7dajDBwLQi0=",
    "publicKey": "CAESIKkB+6eYhFDsEZhn0u+xwIKVhE+1xvgJoV5/csc+CS6R"
}
```
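The `secretKey` field is what the `--sk` flag expects. A sketch of extracting it from a saved keypair document with `sed` (the filename `keypair.json` is an assumption):

```shell
# Save a sample keypair document (normally the output of `aqua key create`).
cat > keypair.json <<'EOF'
{
    "peerId": "12D3KooWMC6picJQNruwFMFWqP62FWLtbM94TGYEzCsthsKa46CQ",
    "secretKey": "QG3Ot2i1kD4Mpw0RpsKtUjbA/0XjZ0WP7dajDBwLQi0=",
    "publicKey": "CAESIKkB+6eYhFDsEZhn0u+xwIKVhE+1xvgJoV5/csc+CS6R"
}
EOF

# Pull out the value of the "secretKey" field.
sk=$(sed -n 's/.*"secretKey": "\([^"]*\)".*/\1/p' keypair.json)
echo "$sk"
```

The extracted value could then be passed as `--sk "$sk"` to `aqua run` or other subcommands.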
#### Aqua Module Distribution

A critical step is to get our Wasm modules and service configuration files to our target hosts. The Aqua CLI provides the capability to upload and configure our assets into hosted services under the `aqua remote` namespace:

```
aqua remote --help
Usage:
...
    aqua remote deploy_service
    aqua remote remove_service
...

Subcommands:
...
    deploy_service
        Deploy service from WASM modules
    remove_service
        Remove service
...
```

See the [service management](service-management.md) section for details and examples.

#### Aqua Environments Listing

The `aqua config default_peers` utility shows a list of peers in [multiaddr](https://github.com/multiformats/multiaddr) format for a specific Fluence network. Currently, there are three environments: `krasnodar`, the default network, `stage` and `testnet`.

```
aqua config default_peers testnet
```

shows a list of `testnet` peers:

```
/dns4/net01.fluence.dev/tcp/19001/wss/p2p/12D3KooWEXNUbCXooUwHrHBbrmjsrpHXoEphPwbjQXEGyzbqKnE9
/dns4/net01.fluence.dev/tcp/19990/wss/p2p/12D3KooWMhVpgfQxBLkQkJed8VFNvgN4iE6MD7xCybb1ZYWW2Gtz
/dns4/net02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHk9BjDQBUqnavciRPhAYFvqKBe4ZiPPvde7vDaqgn5er
/dns4/net03.fluence.dev/tcp/19001/wss/p2p/12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
/dns4/net04.fluence.dev/tcp/19001/wss/p2p/12D3KooWJbJFaZ3k5sNd8DjQgg3aERoKtBAnirEvPV8yp76kEXHB
/dns4/net05.fluence.dev/tcp/19001/wss/p2p/12D3KooWCKCeqLPSgMnDjyFsJuWqREDtKNHx1JEBiwaMXhCLNTRb
/dns4/net06.fluence.dev/tcp/19001/wss/p2p/12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH
/dns4/net07.fluence.dev/tcp/19001/wss/p2p/12D3KooWBSdm6TkqnEFrgBuSkpVE3dR1kr6952DsWQRNwJZjFZBv
/dns4/net08.fluence.dev/tcp/19001/wss/p2p/12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H
/dns4/net09.fluence.dev/tcp/19001/wss/p2p/12D3KooWF7gjXhQ4LaKj6j7ntxsPpGk34psdQicN2KNfBi9bFKXg
/dns4/net10.fluence.dev/tcp/19001/wss/p2p/12D3KooWB9P1xmV3c7ZPpBemovbwCiRRTKd3Kq2jsVPQN4ZukDf
```
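A common pattern is to capture the first advertised peer as the relay address for subsequent commands. A sketch over a saved excerpt of the listing above (the filename `peers.txt` is an assumption):

```shell
# A saved excerpt of `aqua config default_peers testnet` output.
cat > peers.txt <<'EOF'
/dns4/net01.fluence.dev/tcp/19001/wss/p2p/12D3KooWEXNUbCXooUwHrHBbrmjsrpHXoEphPwbjQXEGyzbqKnE9
/dns4/net02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHk9BjDQBUqnavciRPhAYFvqKBe4ZiPPvde7vDaqgn5er
EOF

# Use the first listed peer as the relay.
relay=$(head -n1 peers.txt)
echo "$relay"
```

The result could then be passed as `--addr "$relay"`.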
21  aqua-cli/peer-state-info.md  Normal file
@@ -0,0 +1,21 @@

# Peer state info

Information about services, modules and blueprints is stored on peers. Let's look at the commands with which you can get this information.

This is how you can get a list of the relevant commands:

```
aqua remote --help

...
    list_modules
        List all modules on a peer
    list_blueprints
        List all blueprints on a peer
    list_interfaces
        List all service interfaces on a peer by a given owner
    get_interface
        Show interface of a service
...
```
83  aqua-cli/scheduling-scripts.md  Normal file
@@ -0,0 +1,83 @@

---
description: Scheduling Script
---

# Scheduling Scripts

Using scheduled scripts, it is possible to decouple service execution from the client and instead rely on a cron-like scheduler running on a node to trigger the service calls.

To schedule an Aqua function, all argument values must be literals (strings, numbers, or bools). There is no error handling in these scripts, as there is no service to send errors to, so you need to handle errors on your own.

#### Add Script

You can add a script as follows:

```
aqua script add --sk secret-key -i path/to/aqua/file --func 'someFunc(arg1, arg2, "literal string")' --addr relay/multiadd --data-path path/to/json/with/args --interval 100
```

`--sk` is a secret key. You cannot remove a script with a different secret key.

`-i` is the path to an Aqua file with your function.

`--func` is the function, with arguments, that will be scheduled.

`--addr` is your relay.

Use `--on` if you want to schedule a script on another node (not on the relay).

`--data-path` is the path to a JSON file with arguments.

`--data` is a JSON string with arguments.

`--interval` is how often your script will be called, in seconds. If the option is not specified, the script will run only once.
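Since `--interval` is expressed in seconds, it is easy to miscompute longer periods. A trivial sketch of deriving common intervals with shell arithmetic:

```shell
# Common scheduling intervals, in seconds, for the --interval flag.
hourly=$((60 * 60))
daily=$((24 * hourly))
echo "$hourly $daily"
```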
Output:

```
Script was scheduled
"dfc9cb4f-46be-48cb-a742-d3e23d03b6cf"
```

The last string is the script id. It can be used to remove your script.

#### Remove script

```
aqua script remove --sk secret_key --addr /relay/multiadds --script-id script_id
```

Output:

```
Script was removed
```

#### List scheduled scripts

You can get info about all scheduled scripts on a node:

```
aqua script list --addr /relay/addr
```

Output:

```
[
    {
        "failures": 0,
        "id": "d1683d7e-cd9f-4c02-802e-250d800177d4",
        "interval": "1h",
        "owner": "12D3KooWDp7qmZDh83GUvSnfb43W5vqogwBrgMhEHA7nqdtgJJ3w",
        "src": "air-code-here"
    },
    {
        "failures": 0,
        "id": "3890e3d6-ae4a-45bb-9ab4-229cfee2893c",
        "interval": "1day",
        "owner": "12D3KooWDp7qmZDh83GUvSnfb43W5vqogwBrgMhEHA7nqdtgJJ3w",
        "src": "air-code-here"
    }
]
```
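When cleaning up, you might extract the script ids from the listing and feed them to `aqua script remove`. A sketch over a trimmed sample of the `aqua script list` output, using only `sed` (the filename `scripts.json` is an assumption):

```shell
# A trimmed sample of `aqua script list` output.
cat > scripts.json <<'EOF'
[ { "id": "d1683d7e-cd9f-4c02-802e-250d800177d4" },
  { "id": "3890e3d6-ae4a-45bb-9ab4-229cfee2893c" } ]
EOF

# Collect every "id" value, one per line.
ids=$(sed -n 's/.*"id": "\([^"]*\)".*/\1/p' scripts.json)
echo "$ids"
```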
136  aqua-cli/service-management.md  Normal file
@@ -0,0 +1,136 @@

# Service Management

#### Deploy A Service

The `aqua remote deploy_service` command allows you to deploy Wasm modules and the associated service configuration specification to some peer, where the command line structure looks like this:

```
aqua remote deploy_service \
    --addr multiaddr \
    --data-path configs/service_config.json \
    --service service_name \
    --sk your_secret_key
```

where `--addr` is the multiaddr of some relay to connect to the network of choice. Use the `--on peerId` option to deploy the service to a peer other than the connection relay.

Note that `--config-path` is a path to a service configuration file that includes the path to the Wasm modules.

Consider the following service configuration JSON template:

```
{
    "service_name": {
        "modules": [
            {
                "name": "module_name",
                "path": "/path/to/wasm",
                "mounted_binaries": [["command_line_tool", "/path/to/command_line_tool"]],
                "logger_enabled": [true]
            },
            {
                "name": "second_module_name",
                "path": "path/to/second_module.wasm",
                "logger_enabled": [true],
                "mapped_dirs": [["/path/to/dir"]],
                "preopened_files": [["/path/to/file"]],
                "envs": [["env1", "env2"]],
                "max_heap_size": ["100Mb"]
            }
        ]
    },
    "another_service_name": {
        "modules": [
            {
                "name": "third_module",
                "path": "./artifacts/mean_service.wasm",
                "logger_enabled": [true]
            }
        ]
    }
}
```
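A single configuration file can hold several service entries; the top-level keys are the names that the service flag refers to. A sketch of listing them with `sed` (the filename is an assumption, and the pattern relies on four-space indentation of the top-level keys, as in the template above):

```shell
# A trimmed copy of the service configuration template above.
cat > service_config.json <<'EOF'
{
    "service_name": { "modules": [] },
    "another_service_name": { "modules": [] }
}
EOF

# The top-level keys are the deployable service names.
sed -n 's/^    "\([a-z_]*\)": {.*/\1/p' service_config.json
```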
It is important to note that in a single configuration file you can specify multiple service configurations, which are called with the `--service` flag referencing the respective config name, e.g., `service_name` and `another_service_name` respectively. For more information see the builtin [aqua-lib reference](https://github.com/fluencelabs/aqua-lib/blob/main/builtin.aqua#L206).

`--sk secret_key` is a secret key that you can get with `aqua key create`. It is required, because you cannot manage or remove your service without this key.

Command output:

```
Your peerId: 12D3KooWCNDmmX9wGsbUuhP99A1V7tc2GseVq3Wx6KvRWNFck1Yx
"Going to upload a module..."
2022.02.17 15:55:12 [INFO] created ipfs client to /ip4/164.90.164.229/tcp/5001
2022.02.17 15:55:12 [INFO] connected to ipfs
2022.02.17 15:55:14 [INFO] file uploaded
"Now time to make a blueprint..."
"Blueprint id:"
"6f438ff43b9d5e7f980992e339a3eeef7da4d0e2fefca8f6229e440b509d454d"
"And your service id is:"
"a1076a0f-091b-4c80-84f3-1582cb02ecd9"
```

Here we see that the modules are uploaded to the node first, then a blueprint is created. The last line is the most important: the service id will be used for service discovery, management and removal.
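In automation, you would typically capture that service id from the deploy output for later calls. A sketch over a saved excerpt of the output shown above (`deploy.log` is an assumed filename; the parsing keys off the literal marker line):

```shell
# A saved excerpt of the deploy output.
cat > deploy.log <<'EOF'
"And your service id is:"
"a1076a0f-091b-4c80-84f3-1582cb02ecd9"
EOF

# Take the line right after the marker and strip the quotes.
service_id=$(grep -A1 'And your service id is' deploy.log | tail -n1 | tr -d '"')
echo "$service_id"
```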
#### Remove A Service

```
aqua remote remove_service \
    --addr multiaddr \
    --sk your_secret_key \
    --id service_id
```

To remove a service you need to know the host peer, the service id and the secret key. If the service location is different from your relay, use the `--on peerId` option to provide the required peer location. Since only a service owner is authorized to delete a service, the secret key for the service needs to be provided.

#### Manual deploy

Deploying a service is a step-by-step process in which the script uploads the Wasm modules, creates blueprints and then creates services out of the blueprints. You can do it manually to have more control and write your own scripts.

Upload a module to IPFS:

```
aqua ipfs upload --path /path/to/file \
    --addr /peer/multiaddress \
    --sk secret_key (if needed)
```

You will get the CID of your module. Then you need to add the module from the vault. Right now you can do it with an Aqua service:

```
res <- Ipfs.get(cid)
conf <- Dist.make_module_config(...)
hash <- Dist.add_module_from_vault(res.path, conf)
```

`hash` is the module hash that can be used in blueprint creation.

Create a blueprint:

```
aqua remote add_blueprint --name blueprint_name \
    --addr /peer/multiaddress \
    --dependency hash:ipfs_hash1 \
    --dependency hash:ipfs_hash2
```
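A blueprint can depend on any number of modules, each passed as a repeated `--dependency hash:…` flag. A sketch of assembling that flag list from a set of hashes (the hash values are placeholders):

```shell
# Placeholder module hashes, normally returned by add_module_from_vault.
hashes="ipfs_hash1 ipfs_hash2"

# Build the repeated --dependency flags for aqua remote add_blueprint.
deps=""
for h in $hashes; do
    deps="$deps --dependency hash:$h"
done
echo "$deps"
```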
The blueprint id will be returned as a result.

Create a service from a blueprint:

```
aqua remote create_service --name blueprint_name \
    --addr /peer/multiaddress \
    --id blueprint_id
```

The service id will be returned as a result.

You can also create services directly from the Aqua language. Have a closer look at these examples and services:

{% embed url="https://github.com/fluencelabs/aqua-lib/blob/main/builtin.aqua#L189" %}

{% embed url="https://github.com/fluencelabs/aqua/blob/main/npm/aqua/dist.aqua#L36" %}
153  changelog.md  Normal file
@@ -0,0 +1,153 @@

# Changelog

The Aqua compiler's versioning scheme is the following: `0.BREAKING.ENHANCING.RELEASE`

* `0` shows that Aqua does not meet its vision yet, so syntax and semantics can change quickly
* the `BREAKING` part is incremented for each breaking change, when old `.aqua` files need to be updated to compile with the new version
* the `ENHANCING` part is incremented for every syntax addition
* `RELEASE` is the release number; it reflects internal compiler changes and bugfixes that leave the language untouched
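Under this scheme, a change in the second component signals that existing `.aqua` files may need updating. A sketch of checking two versions for that case with plain shell parsing (the version strings are examples):

```shell
old=0.6.4
new=0.7.1

# Split out the BREAKING component (the second dot-separated field).
old_breaking=$(echo "$old" | cut -d. -f2)
new_breaking=$(echo "$new" | cut -d. -f2)

if [ "$new_breaking" -ne "$old_breaking" ]; then
    echo "breaking change: update your .aqua files"
fi
```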
### [0.7.1](https://github.com/fluencelabs/aqua/releases/tag/0.7.1) – March 25, 2022

* Now Aqua supports [arithmetic operators](language/variables.md#arithmetic-operators) (e.g. `1 + x`), arrow calls in any place (e.g. `for x <- foo(3 + bar()) par`...) and simple comparisons (e.g. `if a > 3`) ([#461](https://github.com/fluencelabs/aqua/pull/461) -> [#410](https://github.com/fluencelabs/aqua/issues/410))
* Aqua CLI fixes ([#466](https://github.com/fluencelabs/aqua/pull/466), [#465](https://github.com/fluencelabs/aqua/pull/465), [#464](https://github.com/fluencelabs/aqua/pull/464), [#462](https://github.com/fluencelabs/aqua/pull/462))

### [0.7.0](https://github.com/fluencelabs/aqua/releases/tag/0.7.0) – March 22, 2022

* Moved all features from the deprecated `fldist` to `aqua`. All interactions with peers have moved to the `aqua remote` subcommand. Descriptions of all commands can be found in the [Aqua book](https://doc.fluence.dev/aqua-book/aqua-cli) ([#457](https://github.com/fluencelabs/aqua/pull/457))
* Updated FluenceJS to 0.21.5 ([#456](https://github.com/fluencelabs/aqua/pull/456))
* Switched to the v3 FluenceJS API, which improves JS support for optional Aqua types ([#453](https://github.com/fluencelabs/aqua/pull/453))
* Added a message when a function is not found ([#454](https://github.com/fluencelabs/aqua/pull/454))

### [0.6.4](https://github.com/fluencelabs/aqua/releases/tag/0.6.4) – March 15, 2022

* [Closures capture their topological context](language/closures.md) now ([#356](https://github.com/fluencelabs/aqua/issues/356))
* Small changes ([#452](https://github.com/fluencelabs/aqua/pull/452), [#449](https://github.com/fluencelabs/aqua/pull/449), [#450](https://github.com/fluencelabs/aqua/pull/450))

### [0.6.3](https://github.com/fluencelabs/aqua/releases/tag/0.6.3) – March 4, 2022

* Added [collections creation syntax](language/variables.md#collections) ([#445](https://github.com/fluencelabs/aqua/pull/445))

### [0.6.2](https://github.com/fluencelabs/aqua/releases/tag/0.6.2) – February 24, 2022

* Added top and bottom types to the parser – will be used for debugging functions ([#442](https://github.com/fluencelabs/aqua/pull/442))
* [Schedule scripts](aqua-cli/scheduling-scripts.md) using Aqua CLI ([#440](https://github.com/fluencelabs/aqua/pull/440))
* Better timeout handling for the CLI ([#437](https://github.com/fluencelabs/aqua/pull/437))

### [0.6.1](https://github.com/fluencelabs/aqua/releases/tag/0.6.1) – February 16, 2022

* `aqua dist deploy` to deploy a service to the Fluence network ([#413](https://github.com/fluencelabs/aqua/pull/413), [#419](https://github.com/fluencelabs/aqua/pull/419), [#422](https://github.com/fluencelabs/aqua/pull/422), [#431](https://github.com/fluencelabs/aqua/pull/431))
* `aqua dist remove` to remove a deployed service ([#428](https://github.com/fluencelabs/aqua/pull/428))
* `aqua env` to show a list of known Fluence peers ([#434](https://github.com/fluencelabs/aqua/pull/434))
* Many, many bugfixes ([#414](https://github.com/fluencelabs/aqua/pull/414), [#415](https://github.com/fluencelabs/aqua/pull/415), [#420](https://github.com/fluencelabs/aqua/pull/420), [#426](https://github.com/fluencelabs/aqua/issues/426), [#427](https://github.com/fluencelabs/aqua/issues/427))
* Dependencies updated ([#430](https://github.com/fluencelabs/aqua/pull/430))

### [0.6.0](https://github.com/fluencelabs/aqua/releases/tag/0.6.0) – February 4, 2022

* Big internal refactoring for better testability & inline syntax additions ([#403](https://github.com/fluencelabs/aqua/pull/403)) – breaks backward compatibility due to a breaking change in Fluence-JS
* The join expression was changed to generate `noop` ([#406](https://github.com/fluencelabs/aqua/pull/406))
* Default imports can now be used for `aqua` compile just like in `aqua run` ([#400](https://github.com/fluencelabs/aqua/issues/400))
* Added a helper for the `aqua run` development process ([#407](https://github.com/fluencelabs/aqua/pull/407))
* Various bugfixes ([#412](https://github.com/fluencelabs/aqua/pull/412), [#405](https://github.com/fluencelabs/aqua/pull/405), [#397](https://github.com/fluencelabs/aqua/issues/397))

### [0.5.3](https://github.com/fluencelabs/aqua/releases/tag/0.5.3) – January 13, 2022

* New expression: [explicit `join` to wait](language/flow/parallel.md#explicit-join-expression) for results computed in parallel branches ([#402](https://github.com/fluencelabs/aqua/pull/402))
* New syntax to [access a collection element by index](language/variables.md#getters): `array[5]`. With this syntax, non-literal indices are allowed, like `array[conf.length]` ([#401](https://github.com/fluencelabs/aqua/pull/401))
* Refactoring of the compiler's internals: introducing the `raw` model for values ([#398](https://github.com/fluencelabs/aqua/pull/398))
* New network monitoring functions added to the [CLI](aqua-cli/#aqua-run) ([#393](https://github.com/fluencelabs/aqua/pull/393))
* Small improvements and bug fixes ([#395](https://github.com/fluencelabs/aqua/pull/395), [#396](https://github.com/fluencelabs/aqua/pull/396), [#394](https://github.com/fluencelabs/aqua/pull/394), [#392](https://github.com/fluencelabs/aqua/pull/392))

### [0.5.2](https://github.com/fluencelabs/aqua/releases/tag/0.5.2) – December 24, 2021

* [Topology transformations](https://github.com/fluencelabs/aqua/tree/main/model/transform) were completely rewritten: the same Aqua scripts may produce different, probably more efficient, AIR, but new bugs might have been introduced as well ([#371](https://github.com/fluencelabs/aqua/pull/371))
* CLI: as part of the effort to move all Fluence services management routines to Aqua, uploading files to Fluence's companion IPFS is now available via the Aqua CLI ([#390](https://github.com/fluencelabs/aqua/pull/390))
* CLI: bugfixes ([#388](https://github.com/fluencelabs/aqua/pull/388))

### [0.5.1](https://github.com/fluencelabs/aqua/releases/tag/0.5.1) – December 10, 2021

* CLI: support for a [secret key](aqua-cli/#aqua-create-keypair) in `aqua run` ([#375](https://github.com/fluencelabs/aqua/pull/375))
* CLI: added a log level option; print generated AIR ([#368](https://github.com/fluencelabs/aqua/pull/368))
* Improved topology calculation in `par` blocks ([#369](https://github.com/fluencelabs/aqua/pull/369))
* The JAR file is not pushed to releases anymore; JS is the sole compilation target now
* CLI: the path to [@fluencelabs/aqua-lib](https://doc.fluence.dev/aqua-book/libraries/aqua-lib) is provided as an imports folder by default. `import "@fluencelabs/aqua-lib/builtins"` should always work now, even outside of an NPM project ([#384](https://github.com/fluencelabs/aqua/pull/384))
* CLI: pass arguments to `aqua run` as JSON via the `--data` or `--data-path` flag ([#386](https://github.com/fluencelabs/aqua/pull/386))

### [0.5.0](https://github.com/fluencelabs/aqua/releases/tag/0.5.0) – November 24, 2021

* Breaking semantic change: [stream restrictions](language/crdt-streams.md#stream-restrictions). This fixes many obscure bugs which happened when using streams inside `for` cycles ([#321](https://github.com/fluencelabs/aqua/issues/321))
* This version of Aqua is not compatible with `fldist` so far (cannot run the emitted AIR via `fldist`). Use `aqua run` to run Aqua instead ([#358](https://github.com/fluencelabs/aqua/pull/358))
* Added timeout parameter support for `aqua run` ([#360](https://github.com/fluencelabs/aqua/pull/360))
* You need to update to [FluenceJS 0.15.0](changelog.md#0.5.0-november-24-2021)+ and [Fluence Node v0.0.23](https://github.com/fluencelabs/node-distro/releases/tag/v0.0.23)+ for Aqua 0.5 support; previous versions will not work

### [0.4.1](https://github.com/fluencelabs/aqua/releases/tag/0.4.1) – November 10, 2021

* New language feature: [closures](language/closures.md) ([#327](https://github.com/fluencelabs/aqua/pull/327))
* New CLI option `--scheduled` to compile Aqua for Fluence's [Script Storage](libraries/aqua-lib.md) ([#355](https://github.com/fluencelabs/aqua/pull/355))
* Bugfixes for using streams to construct more complex streams ([#277](https://github.com/fluencelabs/aqua/issues/277))
* Better error rendering ([#322](https://github.com/fluencelabs/aqua/issues/322), [#337](https://github.com/fluencelabs/aqua/issues/337))
* Bugfix for comparing Option types ([#343](https://github.com/fluencelabs/aqua/issues/343))

### [0.4.0](https://github.com/fluencelabs/aqua/releases/tag/0.4.0) – October 25, 2021

* The Aqua compiler now emits JS/TS code for [Fluence JS 0.14](https://www.npmjs.com/package/@fluencelabs/fluence). The new JS/TS SDK is heavily rewritten to [support async service function declarations](https://doc.fluence.dev/docs/fluence-js/3\_in\_depth#using-asynchronous-code-in-callbacks). It also embeds a deeply refactored [AquaVM](https://github.com/fluencelabs/aquavm) ([#334](https://github.com/fluencelabs/aqua/pull/334))
* Various bugfixes for AIR generation and compiler behavior ([#328](https://github.com/fluencelabs/aqua/pull/328), [#335](https://github.com/fluencelabs/aqua/pull/335), [#336](https://github.com/fluencelabs/aqua/pull/336), [#338](https://github.com/fluencelabs/aqua/pull/338))

### [0.3.2](https://github.com/fluencelabs/aqua/releases/tag/0.3.2) – October 13, 2021

* Experimental feature: Aqua can now be run from the Aqua CLI ([#324](https://github.com/fluencelabs/aqua/pull/324)):

```
aqua run -i aqua/caller.aqua -f "callFunc(\"arg1\",\"arg2\")"
```

* Many performance-related updates; the compiler now runs faster ([#308](changelog.md#0.3.2-october-13-2021), [#324](https://github.com/fluencelabs/aqua/pull/324))
* UX improvements for the CLI and the JS/TS backend ([#307](https://github.com/fluencelabs/aqua/pull/307), [#313](https://github.com/fluencelabs/aqua/pull/313), [#303](https://github.com/fluencelabs/aqua/pull/303), [#305](https://github.com/fluencelabs/aqua/pull/305), [#301](https://github.com/fluencelabs/aqua/pull/301), [#302](https://github.com/fluencelabs/aqua/pull/302))

### [0.3.1](https://github.com/fluencelabs/aqua/releases/tag/0.3.1) – September 13, 2021

* The `.aqua` extension in imports is now optional: you may `import "file.aqua"` or just `import "file"` with the same effect ([#292](https://github.com/fluencelabs/aqua/pull/292))
* CLI improvements: `--dry` run ([#290](https://github.com/fluencelabs/aqua/pull/290)); the output directory is created if not present ([#287](https://github.com/fluencelabs/aqua/pull/287))
* Many bugfixes: for imports ([#289](https://github.com/fluencelabs/aqua/pull/289)), the TypeScript backend ([#285](https://github.com/fluencelabs/aqua/pull/285), [#294](https://github.com/fluencelabs/aqua/pull/294), [#298](https://github.com/fluencelabs/aqua/pull/298)), and language semantics ([#275](https://github.com/fluencelabs/aqua/issues/275))

### [0.3.0](https://github.com/fluencelabs/aqua/releases/tag/0.3.0) – September 8, 2021

* TypeScript output of the compiler now targets a completely rewritten [TypeScript SDK](https://doc.fluence.dev/docs/js-sdk) ([#251](https://github.com/fluencelabs/aqua/pull/251))
|
||||||
|
* Constants are now `UPPER_CASED`, including always-available `HOST_PEER_ID` and `INIT_PEER_ID` ([#260](https://github.com/fluencelabs/aqua/pull/260))
|
||||||
|
* The compiler is now distributed as [@fluencelabs/aqua](https://www.npmjs.com/package/@fluencelabs/aqua) package (was `aqua-cli`) ([#278](https://github.com/fluencelabs/aqua/pull/278))
|
||||||
|
* `aqua` is the name of the compiler CLI command now (was `aqua-cli`) ([#278](https://github.com/fluencelabs/aqua/pull/278))
|
||||||
|
* JVM version of the compiler is now available with `aqua-j` command; JS build is called by default – so no more need to have JVM installed ([#278](https://github.com/fluencelabs/aqua/pull/278))
|
||||||
|
* Now you can have a file that contains only a header with imports, uses, declares, and exports, and no new definitions ([#274](https://github.com/fluencelabs/aqua/pull/274))
|
||||||
|
|
||||||
|
### [0.2.1](https://github.com/fluencelabs/aqua/releases/tag/0.2.1) – August 31, 2021
|
||||||
|
|
||||||
|
* Javascript build of the compiler is now distributed via NPM: to run without Java, use `aqua-js` command ([#256](https://github.com/fluencelabs/aqua/pull/256))
|
||||||
|
* Now dots are allowed in the module declarations: `module Space.Module` & many bugfixes ([#258](https://github.com/fluencelabs/aqua/pull/258))
|
||||||
|
|
||||||
|
### [0.2.0](https://github.com/fluencelabs/aqua/releases/tag/0.2.0) – August 27, 2021
|
||||||
|
|
||||||
|
* Now the compiler emits AIR with the new `(ap` instruction, hence it's not backwards compatible ([#241](https://github.com/fluencelabs/aqua/pull/241))
|
||||||
|
* Many performance optimizations and bugfixes ([#255](https://github.com/fluencelabs/aqua/pull/255), [#254](https://github.com/fluencelabs/aqua/pull/254), [#252](https://github.com/fluencelabs/aqua/pull/252), [#249](https://github.com/fluencelabs/aqua/pull/249))
|
||||||
|
|
||||||
|
### [0.1.14](https://github.com/fluencelabs/aqua/releases/tag/0.1.14) – August 20, 2021
|
||||||
|
|
||||||
|
* Aqua file header changes: `module`, `declares`, `use`, `export` expressions ([#245](https://github.com/fluencelabs/aqua/pull/245)), see [Imports and Exports](language/header/) for the docs. 
|
||||||
|
* Experimental Scala.js build of the compiler ([#247](https://github.com/fluencelabs/aqua/pull/247))
|
||||||
|
|
||||||
|
### [0.1.13](https://github.com/fluencelabs/aqua/releases/tag/0.1.13) – August 10, 2021
|
||||||
|
|
||||||
|
* Functions can export (return) several values, see [#229](https://github.com/fluencelabs/aqua/pull/229)
|
||||||
|
* Internal changes: migrate to Scala3 ([#228](https://github.com/fluencelabs/aqua/pull/228)), added Product type ([#168](https://github.com/fluencelabs/aqua/pull/225))
|
||||||
|
|
||||||
|
### [0.1.12](https://github.com/fluencelabs/aqua/releases/tag/0.1.12) – August 4, 2021
|
||||||
|
|
||||||
|
* Can have functions consisting of a return operand only, returning a literal or an argument
|
||||||
|
|
||||||
|
### [0.1.11](https://github.com/fluencelabs/aqua/releases/tag/0.1.11) – August 3, 2021
|
||||||
|
|
||||||
|
* Added `host_peer_id` , a predefined constant that points on the relay if Aqua compilation is configured so, and on `%init_peer_id%` otherwise, see [#218](https://github.com/fluencelabs/aqua/issues/218).
|
||||||
|
|
||||||
|
### 0.1.10 – July 26, 2021
|
||||||
|
|
||||||
|
* Added `<<-` operator to push a value into a stream, see #[214](https://github.com/fluencelabs/aqua/pull/214), [#209](https://github.com/fluencelabs/aqua/issues/209).
|
||||||
|
|
5
getting-started/README.md
Normal file
@@ -0,0 +1,5 @@
# Getting Started

[Aqua](https://github.com/fluencelabs/aqua), part of Fluence Labs' Aquamarine Web3 stack, is a purpose-built language to program peer-to-peer networks and compose distributed services hosted on peer-to-peer nodes into applications and backends.

In addition to the language specification, Aqua provides a compiler, which produces the Aqua Intermediate Representation (AIR), and an execution stack, Aqua VM, which is part of every Fluence node implementation and executes AIR.
28
getting-started/installation.md
Normal file
@@ -0,0 +1,28 @@
# Installation

Both the Aqua compiler and the support library can be installed natively with `npm`.

To install the compiler:

```bash
npm -g install @fluencelabs/aqua
```

and to make the Aqua library available to TypeScript applications:

```bash
npm -g install @fluencelabs/aqua-lib
```

Moreover, a VSCode syntax-highlighting extension is available. In VSCode, click on the Extensions button, search for `aqua`, and install the extension.

![Aqua Extension for VSCode](<../.gitbook/assets/Screen Shot 2021-06-29 at 1.06.39 PM.png>)
103
getting-started/quick-start.md
Normal file
@@ -0,0 +1,103 @@
# Quick Start

Every Fluence reference node comes with a set of builtin services that are accessible to Aqua programs. Let's use those readily available services to get the timestamps of a few of our peer-to-peer neighborhood nodes with Aqua.

{% tabs %}
{% tab title="Timestamps With Aqua" %}
```haskell
-- timestamp_getter.aqua
-- bring the builtin services into scope
import "@fluencelabs/aqua-lib/builtin.aqua"

-- create an identity service to join our results
service Op2("op"):
    identity(s: u64)
    array(a: string, b: u64) -> string

-- function to get ten timestamps from our Kademlia
-- neighborhood peers and return them as an array of u64 timestamps
-- the function argument node is our peer id
func ts_getter(node: string) -> []u64:
    -- create a streaming variable
    res: *u64
    -- execute on the specified peer
    on node:
        -- get the base58 representation of the peer id
        k <- Op.string_to_b58(node)
        -- find all (default 20) neighborhood peers from k
        nodes <- Kademlia.neighborhood(k, nil, nil)
        -- for each peer in our neighborhood and in parallel
        for n <- nodes par:
            on n:
                -- try and get the peer's timestamp
                try:
                    res <- Peer.timestamp_ms()
        -- wait for the tenth result (index 9) to be joined
        Op2.identity(res!9)
    -- return an array of ten timestamps
    <- res
```
{% endtab %}
{% endtabs %}

The Aqua script essentially creates a workflow originating from our client peer: it enumerates the neighbor peers of our reference node, calls the builtin timestamp service on each peer in parallel, joins the result stream once ten timestamps have been collected, and returns the u64 array of timestamps back to the client peer.

See the [ts-oracle example](https://github.com/fluencelabs/examples/tree/d52f06dfc3d30799fe6bd8e3e602c8ea1d1b8e8a/aqua-examples/ts-oracle) for the corresponding Aqua files in the `aqua-scripts` directory. Now that we have our script, let's compile it with the `aqua` tool and find our AIR file in the `air-scripts` directory:

{% tabs %}
{% tab title="Compile" %}
```bash
aqua -i aqua-scripts -o air-scripts -a
```
{% endtab %}

{% tab title="Result" %}
```bash
# in the air-scripts dir you should have the following file
timestamp_getter.ts_getter.air
```
{% endtab %}
{% endtabs %}

Once we have our AIR file, we can use either a TypeScript or a command-line client. Let's use the command-line client `aqua` (see the third tab for installation instructions, if needed):

{% tabs %}
{% tab title="Run Air scripts" %}
```bash
# use `aqua run` as your client with some peer id
aqua run \
  -a /dns4/kras-02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf \
  -i aqua-scripts/timestamp_getter.aqua \
  -f 'ts_getter("12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf")'
```
{% endtab %}

{% tab title="Result" %}
```bash
# here we go: ten timestamps in milliseconds obtained in parallel
[
  [
    1624928596292,
    1624928596291,
    1624928596291,
    1624928596299,
    1624928596295,
    1624928596286,
    1624928596295,
    1624928596284,
    1624928596293,
    1624928596289
  ]
]
```
{% endtab %}

{% tab title="Installing aqua" %}
```bash
# if you don't have `aqua` in your setup:
npm -g install @fluencelabs/aqua
```
{% endtab %}
{% endtabs %}

And that's it. We now have ten timestamps right from our selected peer's neighbors.
55
language/README.md
Normal file
@@ -0,0 +1,55 @@
# Language

Aqua is a language for distributed workflow coordination in p2p networks.

It's structured with significant indentation.

```haskell
-- Comments begin with a double-dash and continue to the end of the line
func foo(): -- Comments are allowed almost everywhere
    -- The body of a block expression is indented
    bar(5)
```

Values in Aqua have types, which are designated by a colon, `:`, as seen in the function signature below. The type of a return, which is yielded when a function is executed, is denoted by an arrow pointing to the right, `->`, whereas yielding is denoted by an arrow pointing to the left, `<-`.

```haskell
-- Define a function that yields a string
func bar(arg: i16) -> string:
    -- Call a function
    smth(arg)

    -- Yield a value from a function
    x <- smth(arg)

    -- Return a yielded result from a function
    <- "return literal"
```

Subsequent sections explain the main parts of Aqua.

Data:

* [Types](types.md)
* [Values of those types](variables.md)

Execution:

* [Topology](topology.md) – how to express where the code should be executed
* [Execution flow](flow/) – control structures

Computations:

* [Abilities & Services](abilities-and-services.md)

Advanced parallelism:

* [CRDT Streams](crdt-streams.md)

Code management:

* [Imports & exports](header/)

Reference:

* [Expressions](expressions/)
66
language/abilities-and-services.md
Normal file
@@ -0,0 +1,66 @@
# Abilities & Services

While [Execution flow](flow/) organizes the flow from peer to peer, Abilities & Services describe what exactly can be called on these peers, and how to call it.

An ability is a concept of "what is possible in this context": like a peer-specific trait or a typeclass. It will be better explained once ability passing is implemented.

{% embed url="https://github.com/fluencelabs/aqua/issues/33" %}

## Services

A Service interfaces functions (often provided via a WebAssembly interface) executable on a peer. Example of a service definition:

```haskell
service MyService:
    foo(arg: string) -> string
    bar() -> bool
    baz(arg: i32)
```

Service functions in Aqua have no function body. Computations of any complexity are implemented in whatever programming language fits, and then brought into the Aqua execution context. Aqua calls these functions but does not peek into what's going on inside.

### Built-in Services

Some services may be singletons available on all peers. Such services are called built-ins, and they are always available in any scope.

```haskell
-- A built-in service has a constant ID, so it's always resolved
service Op("op"):
    noop()

func foo():
    -- Call the noop function of the "op" service locally
    Op.noop()
```

### Service Resolution

A peer may host many services of the same type. To distinguish services from each other, Aqua requires service resolution: the developer must provide the ID of the service to be used on the peer.

```haskell
service MyService:
    noop()

func foo():
    -- Will fail
    MyService.noop()

    -- Resolve MyService: it has id "noop"
    MyService "noop"

    -- Can use it now
    MyService.noop()

    on "other peer":
        -- Should fail: we haven't resolved the MyService ID on the other peer
        MyService.noop()

        -- Resolve MyService on peer "other peer"
        MyService "other noop"
        MyService.noop()

    -- Moved back to the initial peer, where MyService is resolved to "noop"
    MyService.noop()
```

There's no way to call an external function in Aqua without defining all the data types and the service type. One of the most convenient ways to do that is to generate Aqua types from Wasm code in Marine.
60
language/closures.md
Normal file
@@ -0,0 +1,60 @@
# Closures

In Aqua, you can create an arrow within a function, enclosing its context.

```python
service Hello:
    say_hello(to_name: string, peer: string)

func bar(callback: string -> ()):
    callback("Fish")

func foo(peer: string):
    on peer:
        -- Capture service resolution
        Hello "world"
        -- Create a closure named "closure"
        closure = (name: string) -> string:
            -- Use a value that's available at the definition site
            -- to call a service that's resolved at the definition site
            Hello.say_hello(name, peer)
            -- Return a value from the closure; the syntax is the same as in functions
            <- name
    -- Pass this closure to another function
    bar(closure)
```

Starting with Aqua 0.4.1, closures can be created anywhere in a function and then used just like any other arrow (an argument of arrow type, a function, or a service method): passed to another function as an argument, or called right there.

Closures close over three domains:

* Values in scope,
* Service resolutions,
* Topology: the place where the closure is defined should be the place where it's executed.

Compared with functions, closures have one important difference: functions are detached from topology by default. The `func` keyword can be used to bring this behavior to closures, if needed.

```python
service Hello:
    say_hello()

func foo():
    on HOST_PEER_ID:
        Hello "hello"

        -- This closure will execute on HOST_PEER_ID
        closure = () -> ():
            Hello.say_hello()

        fn = func () -> ():
            Hello.say_hello()

    -- Will go to HOST_PEER_ID, where the Hello service is resolved, and call say_hello
    closure()

    -- Will be called on the current peer, probably INIT_PEER_ID, and may fail
    -- in case the Hello service is not defined or has another ID
    fn()
```

It is not yet possible to return an arrow from an arrow.
137
language/crdt-streams.md
Normal file
@@ -0,0 +1,137 @@
# CRDT Streams

In Aqua, an ordinary value is a name that points to a single result:

```haskell
value <- foo()
```

A stream, on the other hand, is a name that points to zero or more results:

```haskell
values: *string

-- Write to a stream several times
values <- foo()
values <- foo()

-- A value can be pushed to a stream
-- without an explicit function call with the <<- operator:
values <<- "foo"
x <- foo()
values <<- x
```

A stream is a kind of [collection](types.md#collection-types) and can be used in place of other collections:

```haskell
func foo(peer: string, relay: ?string):
    on peer via relay:
        Op.noop()

func bar(peer: string, relay: string):
    relayMaybe: *string
    if peer != %init_peer_id%:
        -- Write into a stream
        relayMaybe <<- relay
    -- Pass a stream as an optional value
    foo(peer, relayMaybe)
```

But the most powerful use of streams pertains to parallel execution, which incurs non-determinism.

## Streams: Lifecycle And Guarantees

A stream's lifecycle can be separated into three stages:

* Source: (parallel) writes to a stream
* Map: handles the stream values
* Sink: converts the resulting stream into a scalar

Consider the following example:

```haskell
alias PeerId: string

func foo(peers: []PeerId) -> string:
    -- Store a list of peer IDs collected from somewhere
    -- This is a stream (denoted by *, which means "0 or more values")
    resp: *PeerId

    -- Will go to all peers in parallel
    for p <- peers par:
        -- Move execution flow to the peer p
        on p:
            -- Get a peer ID from a service call (called on p)
            resp <- Srv.call()

    -- You can think of resp2 as a locally consistent lazy list
    resp2: *PeerId

    -- What is the value of resp at this point?
    -- Keep an eye on the `par` there: it actually FORKs execution
    -- into several branches on different peers.
    for r <- resp par:
        -- Move execution to peer r
        on r:
            -- Call Srv locally
            resp2 <- Srv.call()

    -- Wait for 6 responses on resp2: it's a JOIN
    Op.identity(resp2!5)
    -- Once we have 6 responses, merge them:
    -- the function treats resp2 as an array of strings and concatenates
    -- all of them into a single string.
    -- This function call "fixes" the content of resp2, making a single observation.
    -- This is a "stream canonicalization" event: the values, order, and length
    -- are fixed at the moment of the first function call; the function will not
    -- be called again with different data.
    r <- Srv.concat(resp2)
    -- You can keep writing to a stream after its value is used

    <- r
```

In this case, for each peer `p` in `peers`, a new `PeerId` is obtained from `Srv.call` and written into the `resp` stream.

Every peer `p` in `peers` does not know anything about how the other iterations proceed.

Once a `PeerId` is written to the `resp` stream, the second `for` is triggered. This is the mapping stage.

Then the results are sent to the first peer, to call `Op.identity` there. This `Op.identity` waits until element number 5 is defined on the `resp2` stream.

When the join is complete, the stream is consumed by the concatenation service to produce a scalar value, which is returned.

During execution, the involved peers have different views of the state of execution: each of the `for` parallel branches has no view of or access to the other branches' data, and eventually the execution flows to the initial peer. The initial peer then merges writes to the `resp` stream and to the `resp2` stream, respectively. These writes are done in a conflict-free fashion. Furthermore, the respective heads of the `resp` and `resp2` streams will not change from each peer's point of view, as they are immutable and new values can only be appended. However, different peers may see a different order of the stream values, depending on the order in which they receive them.

### Stream restrictions

Restriction is the part of π-calculus that bounds (restricts) a name to a scope. For Aqua streams it means that a stream is not accessible outside of its definition scope, and the stream is always fresh when execution enters the scope.

This behavior was introduced in [Aqua 0.5](https://github.com/fluencelabs/aqua/releases/tag/0.5.0).

```python
-- Note: returns []string (an immutable, read-only collection), not *string
func smth(xs: []string) -> []string:
    -- This stream is available within the function, and is empty
    outside: *string
    for x <- xs:
        -- This stream is empty at the beginning of each iteration
        inside: *string

        -- We can manipulate the inside stream within the scope
        if x == "ok":
            inside <<- "ok"
        else:
            inside <<- "not ok"

        -- Read the value
        if inside! == "ok":
            outside <<- "result"

    -- The outside stream does not escape this function's scope:
    -- it is converted to an array (canonicalized) and cannot be appended to after that
    <- outside
```

You can still keep streams as streams by taking them as `*string` arguments, or by returning them as `*string`.
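To illustrate that last point, here is a minimal, hypothetical sketch (the function and variable names are made up, not part of the book's examples) of keeping a stream usable as a stream by passing it around as a `*string` argument instead of returning it:

```haskell
-- Hypothetical helper: accepts a stream argument and appends to it,
-- so the caller's stream stays a stream (no canonicalization happens here)
func appendTwice(to: *string, value: string):
    to <<- value
    to <<- value

func caller() -> []string:
    acc: *string
    appendTwice(acc, "hi")
    -- acc still behaves as a stream here: more writes are allowed
    acc <<- "bye"
    -- only at the return boundary is it canonicalized into []string
    <- acc
```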
17
language/expressions/README.md
Normal file
@@ -0,0 +1,17 @@
# Expressions

An Aqua file consists of a header and a body.

### Body expressions

`func`

Function definition is the most crucial part of the language; see [Functions](functions.md) for details.

{% embed url="https://github.com/fluencelabs/aqua/tree/main/semantics/src/main/scala/aqua/semantics/expr" %}
Expressions source code
{% endembed %}
28
language/expressions/functions.md
Normal file
@@ -0,0 +1,28 @@
# Functions

A function in Aqua is a block of code expressing a workflow, i.e., a coordination scenario that works across one or more peers.

A function may take arguments of any type and may return a value.

A function can call other functions, take functions as arguments of [arrow type](../types.md#arrow-types), and be provided as an arrow argument.

Essentially, a function is an arrow. A function call is an expression that connects named arguments to an arrow and gives a name to the result.

Finally, all a function does is call its arguments or service functions.

```haskell
service MySrv:
    foo()

func do_something(): -- arrow of type: -> ()
    MySrv "srv id"
    MySrv.foo()
```

{% hint style="warning" %}
TODO

* list all expressions
* for each, explain the contract and provide a use case
{% endhint %}
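To sketch the arrow-argument mechanics described above (the service and function names here are illustrative, not taken from the book's examples):

```haskell
service Logger:
    log(msg: string)

-- Takes an arrow of type string -> () and calls it twice
func callTwice(f: string -> ()):
    f("first")
    f("second")

func useIt():
    Logger "logger-id"
    -- Wrap the service call in a closure and pass it as an arrow argument
    fn = (msg: string) -> ():
        Logger.log(msg)
    callTwice(fn)
```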
15
language/expressions/header.md
Normal file
@@ -0,0 +1,15 @@
# Header

## Header expressions

### `import`

The `import` expression brings everything defined within the imported file into scope.

```haskell
import "path/to/file.aqua"
```

The path of the file to be imported is first resolved relative to the source file path, and then the `-imports` directories are checked.

See [Imports & Exports](../header/) for details.
21
language/expressions/overrideable-constants.md
Normal file
@@ -0,0 +1,21 @@
---
description: Static configuration pieces that affect compilation
---

# Overrideable constants

### `const`

Constant definition.

Constants can be used across all functions, and can be exported and imported. If a constant is defined using `?=`, it can be overridden by value via compiler flags or imported values.

```haskell
-- This can be overridden with -const "target_peer_id = \"other peer id\""
const target_peer_id ?= "this is a target peer id"

-- This constant cannot be overridden
const service_id = "service id"
```

You can assign only literals to constants. The constant's type is the same as the literal's type. You can override a constant only with a subtype of that literal type.
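For instance, overriding the `?=` constant from the snippet above at compile time might look like this (the input/output paths are illustrative; the `-const` flag shape is taken from the comment in the example):

```bash
aqua -i ./aqua -o ./compiled -const "target_peer_id = \"other peer id\""
```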
42
language/expressions/services.md
Normal file
@@ -0,0 +1,42 @@
# Services

### `service`

Service definition.

A service is a program running on a peer. Every service has an interface that consists of a list of functions. To call a service, the service must be identified: every service has an ID that must be resolved in the peer scope.

In the service definition, you enumerate all the functions, their names, argument and return types, and optionally provide the default service ID.

Services that are part of the protocol, i.e. available from the peer node, come with predefined IDs. Example of a predefined service:

```haskell
service Peer("peer"):
    foo() -- no arguments, no return
    bar(i: bool) -> bool

func usePeer() -> bool:
    Peer.foo() -- results in a call of service "peer", function "foo", on the current peer ID
    z <- Peer.bar(true)
    <- z
```

Example of a custom service:

```haskell
service MyService:
    foo()
    bar(i: bool, z: i32) -> string

func useMyService(k: i32) -> string:
    -- Need to tell the compiler what "my service" means in this scope
    MyService "my service id"
    MyService.foo()
    on "another peer id":
        -- Need to redefine MyService in the scope of this peer as well
        MyService "another service id"
        z <- MyService.bar(false, k)
    <- z
```

Service definitions have types. The type of a service is a product type of arrows. See [Types](../types.md#type-of-a-service-and-a-file).
23
language/expressions/type-definitions.md
Normal file
@@ -0,0 +1,23 @@
# Type definitions

### `data`

[Product type](../types.md#products) definition. See [Types](../types.md) for details.

```haskell
data SomeType:
    fieldName: FieldType
    otherName: OtherType
    third: []u32
```

### `alias`

Aliases a type to a name.

It may help with self-documenting code and refactoring.

```haskell
alias PeerId: string
alias MyDomain: DomainType
```
11
language/flow/README.md
Normal file
@@ -0,0 +1,11 @@
# Execution flow
|
||||||
|
|
||||||
|
Aqua's main goal is to express how the execution flows: moves from peer to peer, forks to parallel flows and then joins back, uses data from one step in another.
|
||||||
|
|
||||||
|
As the foundation of Aqua is based on π-calculus, finally flow is decomposed into [sequential](sequential.md) (`seq`, `.`), [conditional](conditional.md) (`xor`, `+`), [parallel](parallel.md) (`par`, `|`) computations and [iterations](iterative.md) based on data (`!P`). For each basic way to organize the flow, Aqua follows a set of rules to execute the operations:
|
||||||
|
|
||||||
|
* What data is available for use?
|
||||||
|
* What data is exported and can be used below?
|
||||||
|
* How errors and failures are handled?
|
||||||
|
|
||||||
|
These rules form a contract, as in [design-by-contract](https://en.wikipedia.org/wiki/Design\_by\_contract) programming.
|
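As a small illustration of how these flavors compose in one function, consider the following sketch; the peer names and the `foo`/`bar`/`baz` calls are hypothetical:

```haskell
-- Hypothetical sketch: sequential, parallel and conditional flow combined
func showFlow(peerA: string, peerB: string):
  -- sequential by default: one line after another
  x <- foo()
  -- two branches run in parallel on different peers
  on peerA:
    bar(x)
  par on peerB:
    -- conditional: if baz fails, recover with bar
    try:
      baz(x)
    otherwise:
      bar(x)
```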
122 language/flow/conditional.md Normal file
@@ -0,0 +1,122 @@
|
# Conditional

Aqua supports branching: you can return one value or another, recover from an error, or check a boolean expression.

## Contract

* The second arm of the conditional operator is executed if and only if the first arm failed.
* The second arm has no access to the first arm's data.
* A conditional block is considered "executed" if and only if any arm was executed successfully.
* A conditional block is considered "failed" if and only if the second (recovery) arm fails to execute.

## Conditional operations

### try

Tries to perform operations; if they fail, the error is swallowed (unless the `try` block is followed by a `catch`).

```haskell
try:
  -- If foo fails with an error, execution will continue
  -- You should write your logic in a non-blocking fashion:
  -- if your code below depends on `x`, it may halt, as `x` is not resolved.
  -- See Conditional return below for a workaround
  x <- foo()
```

### catch

Catches the standard error from a `try` block.

```haskell
try:
  foo()
catch e:
  logError(e)
```

The type of `e` is:

```haskell
data LastError:
  instruction: string -- What AIR instruction failed
  msg: string -- Human-readable error message
  peer_id: string -- On what peer the error happened
```
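A handler might, for instance, forward those fields to a logging service. The `Logger` service below is hypothetical:

```haskell
-- Hypothetical logging service, used to show access to LastError fields
service Logger("logger"):
  log(instruction: string, msg: string, peer: string)

func guarded():
  try:
    foo()
  catch e:
    Logger.log(e.instruction, e.msg, e.peer_id)
```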
|
### if

`if` corresponds to the `match`/`mismatch` extension of π-calculus.

```haskell
x = true

if x:
  -- always executed
  foo()

if x == false:
  -- never executed
  bar()

if x != false:
  -- executed
  baz()
```

Currently, an `if` expression may contain only a single `==` or `!=` comparison, or a bare boolean value.

Both operands can be variables.
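So comparing two variables looks like the following sketch; the names and the `getMessage` call are hypothetical:

```haskell
-- Both operands of the comparison are variables
expected = "ping"
actual <- getMessage()
if actual == expected:
  foo()
```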
### else

Just the second branch of `if`, in case the condition does not hold.

```haskell
if true:
  foo()
else:
  bar()
```

If you want to set a variable based on a condition, see Conditional return.

### otherwise

You may add `otherwise` to provide recovery for any block or expression:

```haskell
x <- foo()
otherwise:
  -- if foo can't be executed, then do bar()
  y <- bar()
```

## Conditional return

In Aqua, a function may have only one return expression, which must be the very last expression. And conditional branches cannot define the same variable:

```haskell
try:
  x <- foo()
otherwise:
  x <- bar() -- Error: name x was already defined in scope, can't compile
```

So to get a value based on a condition, we need to use a [writeable collection](../types.md#collection-types).

```haskell
-- resultBox may hold 0 or more values of type string, and is writeable
resultBox: *string
try:
  resultBox <- foo()
otherwise:
  resultBox <- bar()

-- now resultBox contains exactly one value, let's extract it!
result = resultBox!

-- The type of result is string
-- Please note that if there were no writes to resultBox,
-- the first use of result will halt.
-- So you need to be careful and ensure that there's always a value.
```
81 language/flow/iterative.md Normal file
@@ -0,0 +1,81 @@
|
# Iterative

π-calculus has a notion of the repetitive process: `!P = P | !P`. That means you can always fork a new `P` process when you need it.

In Aqua, two operations correspond to it: you can call a service function (it's just available whenever it's needed), and you can use a `for` loop to iterate over collections.

### `for` expression

In short, `for` looks like the following:

```haskell
xs: []string

for x <- xs:
  y <- foo(x)

-- x and y are not accessible here, you can even redefine them
x <- bar()
y <- baz()
```

## Contract

* Iterations of a `for` loop are executed sequentially by default.
* Variables defined inside a `for` loop are not available outside.
* A `for` loop's code has access to all variables defined above.
* `for` can be executed on a variable of any [Collection type](../types.md#collection-types).
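To illustrate the third rule, a variable defined above the loop can be read inside every iteration. The `Logger` service below is hypothetical:

```haskell
-- Hypothetical service: `prefix` is defined above the loop
-- and is visible inside its body
service Logger("logger"):
  log(prefix: string, value: string)

func logAll(prefix: string, xs: []string):
  for x <- xs:
    Logger.log(prefix, x)
```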
### Conditional `for`

`for` can be executed on a variable of any [Collection type](../types.md#collection-types). You can make several trials in a loop and stop once any trial succeeds.

```haskell
xs: []string

for x <- xs try:
  -- Will stop trying once foo succeeds
  foo(x)
```

The contract is changed as in [Conditional](conditional.md#contract) flow.

### Parallel `for`

Running many operations in parallel is the most commonly used pattern for `for`.

```haskell
xs: []string

for x <- xs par:
  on x:
    foo()

-- Once the fastest x succeeds, execution continues
-- If you want to make the subsequent execution independent from for,
-- mark it with par, e.g.:
par continueWithBaz()
```

The contract is changed as in [Parallel](parallel.md#contract) flow.

### Export data from `for`

The way to export data from `for` is the same as in [Conditional return](conditional.md#conditional-return) and [Race patterns](parallel.md#join-behavior).

```haskell
xs: []string
return: *string

-- can be par, try, or nothing
for x <- xs par:
  on x:
    return <- foo()

-- Wait for the 6 fastest results -- see Join behavior
baz(return!5, return)
```

### `for` on streams

`for` on streams is one of the most advanced and powerful parts of Aqua. See [CRDT streams](../crdt-streams.md) for details.
254 language/flow/parallel.md Normal file
@@ -0,0 +1,254 @@
|
# Parallel

Parallel execution is where Aqua fully shines.

## Contract

* Parallel arms have no access to each other's data. Sync points must be explicit (see [Join behavior](parallel.md#join-behavior)).
* If any arm is executed successfully, the flow execution continues.
* All the data defined in parallel arms is available in the subsequent code.

## Implementation limitation

Parallel execution has some implementation limitations:

* Parallel means independent execution on different peers
* There is no parallelism when executing a script on a single peer
* There is no concurrency in services: every service instance handles only one job at a time
* Keep services small in terms of computation and memory (a WebAssembly limitation)

These limitations might be overcome in future Aqua updates. For now, plan your application design with them in mind.

## Parallel operations

#### par

`par` syntax is derived from the π-calculus notation of parallelism: `A | B`

```haskell
-- foo and bar will be executed in parallel, if possible
foo()
par bar()

-- It's useful to combine `par` with an `on` block
-- to delegate further execution to different peers.

-- In this case execution will continue on two peers, independently
on "peer 1":
  x <- foo()
par on "peer 2":
  y <- bar()

-- Once any of the previous functions returns x or y,
-- execution continues. We don't know the order, so
-- if y is returned first, hello(x) will not execute yet
hello(x)
hello(y)

-- You can fix it with par:
-- whatever comes faster will advance the execution flow
hello(x)
par hello(y)
```

`par` works in an infix manner between the previously stated expression and the next one.

### co

`co`, short for `coroutine`, prefixes an operation to send it to the background. From the π-calculus perspective, it's the same as `A | null`, where the `null` process is one that does nothing and completes instantly.

```haskell
-- Let's send foo to the background and continue
co foo()

-- Do something on another peer without blocking the flow on this one
co on "some peer":
  baz()

-- This foo does not wait for baz()
foo()

-- Assume that foo is executed on another machine
co try:
  x <- foo()
-- bar will not wait for foo to be executed or even launched
bar()
-- bax will wait for foo to complete
-- if foo failed, x will never resolve
-- and bax will never execute
bax(x)
```
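A common pattern is to kick work off in the background with `co`, do something else, and only block when the result is actually needed. A sketch, with hypothetical `foo`/`bar` calls (`join` is explained below):

```haskell
-- Hypothetical: prefetch a value in the background, block on it later
func prefetch(peer: string) -> string:
  co on peer:
    x <- foo()
  -- unrelated work proceeds without waiting for x
  bar()
  -- block here until x is resolved
  join x
  <- x
```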
## Join behavior

Join means that data was created by different parallel execution flows and is then used on a single peer to perform computations. It works the same way for any parallel blocks, be it `par`, `co` or something else (`for par`).

In Aqua, you can refer to previously defined variables. In sequential computation, they are available as long as execution has not failed:

```haskell
-- Start execution somewhere
on peer1:
  -- Go to peer1, execute foo, remember x
  x <- foo()

-- x is available at this point

on peer2:
  -- Go to peer2, execute bar, remember y
  y <- bar()

-- Both x and y are available at this point
-- Use them in a function
baz(x, y)
```

Let's make this script parallel: execute `foo` and `bar` on different peers in parallel, then use both results to compute `baz`.

```haskell
-- Start execution somewhere
on peer1:
  -- Go to peer1, execute foo, remember x
  x <- foo()

-- Notice par on the next line: it means, go to peer2 in parallel with peer1

par on peer2:
  -- Go to peer2, execute bar, remember y
  y <- bar()

-- Remember the contract: either x or y is available at this point,
-- as it's enough to execute just one branch to advance further
baz(x, y)
```

What will happen when execution comes to `baz`?

Actually, the script will be executed twice: the first time it will be sent from `peer1`, and the second time from `peer2`. Or the other way round: `peer2` then `peer1`; we don't know which is faster.

When execution reaches `baz` for the first time, the Aqua VM will realize that it lacks some data that is expected to be computed above in the parallel branch, and it will halt.

After the second branch executes, the VM will be woken up again, reach the same piece of code, and realize that it now has enough data to proceed.

This way you can express a race (see [Collection types](../types.md#collection-types) and [Conditional return](conditional.md#conditional-return) for other uses of this pattern):

```haskell
-- Initiate a stream to write into it several times
results: *string

on peer1:
  results <- foo()

par on peer2:
  results <- bar()

-- When any result is returned, take the first (the fastest) to proceed
baz(results!)
```

## Explicit join expression

Consider the case when you want to wait for a certain number of results computed in parallel, and then return.

```python
func collectRespondingPeers(peers: []string, n: i16) -> *string:
  responded: *string
  for p <- peers par:
    responded <<- p
  -- ...
```

How do we return at least `n+1` responding peers?

{% hint style="warning" %}
Keep in mind that indices start from `0`.

If the expected length of a stream equals `n`, and you wait for element `stream[n]`, your code will hang forever, as it exceeds the length!
{% endhint %}

One way is to use a useless stream:

```python
useless: *string
useless <<- responded[n]
<- responded
```

The `useless` stream is indeed useless: we create it just to push the n-th element into it. However, it forces waiting for `responded[n]` to be available. When `responded` is returned, its length will be at least `n+1`.

To eliminate the need for such workarounds, Aqua has the `join` expression, which does nothing except consume its arguments, hence waiting for them:

```python
join responded[n]
<- responded
```

You can pass any number of arguments to `join`, separating them with commas. `join` is executed on a particular node, so `join` respects the `on` scopes it's in.

```python
func getTwo(peerA: string, peerB: string) -> string, string:
  co on peerA:
    a <- foo()
  co on peerB:
    b <- foo()

  -- Without this join, a and b will still be returned,
  -- but in the join case they are returned by value,
  -- while without join they are returned by name,
  -- and it could happen that they're not actually available in time.
  join a, b
  <- a, b
```

## Timeout and race patterns

To limit the execution time of some part of an Aqua script, you can use a pattern that's often called a "race": execute a function in parallel with `Peer.timeout` and take the result of whichever completes first.

This way, you're racing your function against the timeout. If the timeout completes first, consider your function "timed out".

`Peer.timeout` is defined in [`aqua-lib`](https://github.com/fluencelabs/aqua-lib/blob/1193236/builtin.aqua#L135).

For this pattern to work, it is important to keep an eye on where exactly the timeout is scheduled and executed. One caveat: you cannot time out an unreachable peer by scheduling the timeout on that very peer.

Here's an example of how to put a timeout on peer traversal:

```haskell
-- Peer.timeout comes from the standard library
import "@fluencelabs/aqua-lib/builtin"

func traverse_peers(peers: []string) -> []string:
  -- go through the array of peers and collect acknowledgments
  acks: *string
  for peer <- peers par:
    on peer:
      acks <- Service.long_task()

  -- if 10 acks are collected or 1 second has passed, return acks
  join acks[9]
  par Peer.timeout(1000, "timeout")
  <- acks
```

And here's how to approach error handling when using `Peer.timeout`:

```haskell
-- Peer.timeout comes from the standard library
import "@fluencelabs/aqua-lib/builtin"

func getOrNot() -> string:
  status: *string
  res: *string
  -- Move execution to another peer
  on "other peer":
    res <- Srv.someFunction()
    status <<- "ok"
  -- In parallel with the previous on, run timeout on this peer
  par status <- Peer.timeout(1000, "timeout")

  -- status! waits for the first write to happen
  if status! == "timeout":
    -- Now we know that "other peer" was not able to respond within a second
    -- Do some failover
    res <<- "providing a local failover value"

  <- res!
```
60 language/flow/sequential.md Normal file
@@ -0,0 +1,60 @@
|
# Sequential

By default, Aqua code is executed line by line, sequentially.

## Contract

* Data from the first arm is available in the second arm.
* The second arm is executed if and only if the first arm succeeded.
* If any arm failed, then the whole sequence has failed.
* If all arms are executed successfully, then the whole sequence is executed successfully.

## Sequential operations

### call arrow

Any runnable piece of code in Aqua is an arrow from its domain to its codomain.

```haskell
-- Call a function
foo()

-- Call a function that returns something, assign the result to a variable
x <- foo()

-- Call an ability function
y <- Peer.identify()

-- Pass an argument
z <- Op.identity(y)
```

When you write `<-`, it means not just "assign the result of the function on the right to the variable on the left". It means that all the effects are executed: a [service](../abilities-and-services.md) may change state, the [topology](../topology.md) may be shifted. But you end up (semantically) on the same peer where you called the arrow.

### on

`on` denotes the peer where the code must be executed. `on` is handled sequentially, and the code inside is executed line by line by default.

```haskell
func foo():
  -- Will be executed where `foo` was executed
  bar()

  -- Move to another peer
  on another_peer:
    -- To call bar, we need to leave the peer where we were and get to another_peer
    -- It's done automagically
    bar()

  on third_peer via relay:
    -- This is executed on third_peer
    -- But we denote that to get to third_peer and to leave third_peer
    -- an additional hop is needed: get to relay, then to the peer
    bar()

  -- Will be executed at the `foo` call site again
  -- To get from the previous `bar`, the compiler will add a hop via relay
  bar()
```

See more in the [Topology](../topology.md) section.
103 language/header/README.md Normal file
@@ -0,0 +1,103 @@
|
# Imports And Exports

An Aqua source file has a header and a body. The body contains function definitions, services, types, and constants. The header manages what is imported from other files and what is exported.

## Module

By default, an `.aqua` file exports and declares everything it contains. With the `module` header you can describe the `.aqua` file's interface.

```python
-- A module expression may be only on the very first line of the file
module ModuleName declares *
```

A module name may contain dots, e.g. `Module.Name`.

`ModuleName` can be used as the module's name when this file is `use`d. In this case, only what is enumerated in the `declares` section will be available. `declares *` declares everything in the file as the module interface.

```python
module ModuleName declares CONSTNAME, ServiceName, MyType, fn

const CONSTNAME = "smth"

service ServiceName:
  do_smth()

data MyType:
  result: i32

func fn() -> string:
  <- CONSTNAME
```
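Assuming the module above lives in a file named `module_name.aqua` (a hypothetical path), a consumer could reach its declared members through the module name. A sketch; see the Use Expression section below for details:

```haskell
use "module_name.aqua"

func callDeclared() -> string:
  <- ModuleName.fn()
```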
## Import Expression

The main way to import a file is via the `import` expression:

```haskell
import "@fluencelabs/aqua-lib/builtin.aqua"

func foo():
  Op.noop()
```

The Aqua compiler takes a source directory and a list of import directories (usually with `node_modules` as a default). You can use paths to `.aqua` files relative to the current file's path or to the import folders.
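Both styles of path may appear in one file; the relative path below is hypothetical:

```haskell
-- Resolved relative to this file's directory
import "../shared/helpers.aqua"

-- Resolved against an import folder, e.g. node_modules
import "@fluencelabs/aqua-lib/builtin.aqua"
```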
{% hint style="info" %}
The `.aqua` extension in `import` and `use` expressions can be omitted. So, `import "builtin.aqua"` does exactly the same as `import "builtin"`.
{% endhint %}

Everything defined in the imported file is imported into the current namespace.

You can cherry-pick and rename imports using the `import ... from` expression:

```python
import Op as Noop from "@fluencelabs/aqua-lib/builtin"

func foo():
  Noop.noop()
```

## Use Expression

The `use` expression makes it possible to import a file into a named scope.

```python
use Op from "@fluencelabs/aqua-lib/builtin" as BuiltIn

func foo():
  BuiltIn.Op.noop()
```

If the imported file has a `module` header, the `from` and `as` sections of `use` may be omitted.

```python
use "@fluencelabs/aqua-lib/builtin.aqua"
-- Assume that builtin.aqua's header is `module BuiltIn declares *`

func foo():
  BuiltIn.Op.noop()
```

## Export

While it's useful to split code into several functions across different files, it's not always a good idea to compile everything into the host language.

Another problem is library distribution. Developers who want to deliver an `.aqua` library often need to provide it in compiled form as well.

`export` lets a developer decide what exactly is going to be exported, including imported functions.

```python
import bar from "lib"

-- Exported functions and services will be compiled for the host language
-- You can use several `export` expressions
export foo as my_foo
export bar, MySrv

service MySrv:
  call_smth()

func foo() -> bool:
  <- true
```
315 language/header/control-scope-and-visibility.md Normal file
@@ -0,0 +1,315 @@
|
# Control, Scope And Visibility

In Aqua, the default namespace of a module is the file name, and all declarations, i.e., data, services and functions, are public.

For example, the `default_foo.aqua` file:

```python
-- default_foo.aqua

func foo() -> string:
  <- "I am a visible foo func that compiles"
```

Which we compile with

```bash
aqua -i aqua-scripts -o compiled-aqua
```

to obtain Typescript-wrapped AIR, `default_foo.ts`, in the `compiled-aqua` directory:

```typescript
import { FluenceClient, PeerIdB58 } from '@fluencelabs/fluence';
import { RequestFlowBuilder } from '@fluencelabs/fluence/dist/api.unstable';
import { RequestFlow } from '@fluencelabs/fluence/dist/internal/RequestFlow';

// Services

// Functions

export async function foo(client: FluenceClient, config?: {ttl?: number}): Promise<string> {
    let request: RequestFlow;
    const promise = new Promise<string>((resolve, reject) => {
        const r = new RequestFlowBuilder()
            .disableInjections()
            .withRawScript(
                `
(xor
 (seq
  (call %init_peer_id% ("getDataSrv" "-relay-") [] -relay-)
  (xor
   (call %init_peer_id% ("callbackSrv" "response") ["I am a visible foo func that compiles"])
   (call %init_peer_id% ("errorHandlingSrv" "error") [%last_error% 1])
  )
 )
 (call %init_peer_id% ("errorHandlingSrv" "error") [%last_error% 2])
)
`,
            )
            .configHandler((h) => {
                h.on('getDataSrv', '-relay-', () => {
                    return client.relayPeerId!;
                });

                h.onEvent('callbackSrv', 'response', (args) => {
                    const [res] = args;
                    resolve(res);
                });

                h.onEvent('errorHandlingSrv', 'error', (args) => {
                    // assuming error is the single argument
                    const [err] = args;
                    reject(err);
                });
            })
            .handleScriptError(reject)
            .handleTimeout(() => {
                reject('Request timed out for foo');
            })
        if(config && config.ttl) {
            r.withTTL(config.ttl)
        }
        request = r.build();
    });
    await client.initiateFlow(request!);
    return promise;
}
```
Regardless of your output target, i.e. raw AIR or Typescript-wrapped AIR, the default module namespace is `default_foo` and `foo` is the compiled function.

While this default approach is handy for single-file, single-module development, it makes for inefficient dependency management and unnecessary compilations in multi-module projects. The remainder of this section introduces the scoping and visibility concepts available in Aqua to effectively manage dependencies.

### Managing Visibility With `module` and `declares`

By default, all declarations in a module, i.e., _data_, _service_ and _func_, are public. With the `module` declaration, Aqua allows developers to create named modules and define membership visibility, where the default visibility of `module` is private. That is, with the `module` declaration all module members are private and do not get compiled.

Let's create an `export.aqua` file like so:

```python
module Export

func foo() -> string:
  <- "I am Export foo"
```

When we compile `export.aqua`

```bash
aqua -i aqua-scripts -o compiled-aqua
```

nothing gets compiled, as expected:

```bash
2021.09.02 11:31:41 [INFO] Aqua Compiler 0.2.1-219
2021.09.02 11:31:42 [INFO] Source /Users/bebo/localdev/aqua-245/documentation-examples/aqua-scripts/export.aqua: compilation OK (nothing to emit)
```

You can further check the output directory, `compiled-aqua` in our case, for the lack of output files. By corollary, `foo` cannot be imported from other files. For example:

```python
-- import.aqua

import "export.aqua"

func wrapped_foo() -> string:
  res <- foo()
  <- res
```

This results in a compile failure, since `foo` is not visible to `import.aqua`:

```python
6 func wrapped_foo() -> string:
7     res <- foo()
          ^^^==
          Undefined arrow, available: HOST_PEER_ID, INIT_PEER_ID, nil, LAST_ERROR
8     <- res
```

We can use `declares` to make parts of a `module` namespace visible to **consuming** modules. For example,

```python
-- export.aqua
module Export declares foo

func bar() -> string:
  <- " I am MyFooBar bar"

func foo() -> string:
  res <- bar()
  <- res
```
|
||||||
|
|
||||||
|
in and by itself does not result in compiled Aqua:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
aqua -i aqua-scripts -o compiled-aqua -a
|
||||||
|
Aqua JS: node /Users/bebo/.nvm/versions/node/v14.16.0/lib/node_modules/@fluencelabs/aqua/aqua.js -i aqua-scripts -o compiled-aqua -a
|
||||||
|
Aqua JS:
|
||||||
|
Aqua JS: 2021.09.08 13:36:17 [INFO] Aqua Compiler 0.3.0-222
|
||||||
|
2021.09.08 13:36:21 [INFO] Source /Users/bebo/localdev/aqua-245/documentation-examples/aqua-scripts/export.aqua: compilation OK (nothing to emit)
|
||||||
|
```
|
||||||
|
|
||||||
|
But once we link from another module, e.g.:
|
||||||
|
|
||||||
|
```python
|
||||||
|
import foo from "export.aqua"
|
||||||
|
|
||||||
|
func foo_wrapper() -> string:
|
||||||
|
res <- foo()
|
||||||
|
<- res
|
||||||
|
```
|
||||||
|
|
||||||
|
We get the appropriate result:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
2021.09.08 13:40:17 [INFO] Source /Users/bebo/localdev/aqua-245/documentation-examples/aqua-scripts/export.aqua: compilation OK (nothing to emit)
|
||||||
|
2021.09.08 13:40:17 [INFO] Result /Users/bebo/localdev/aqua-245/documentation-examples/compiled-aqua/import.ts: compilation OK (1 functions)
|
||||||
|
```

in the form of `import.ts`:

```typescript
// compiled-aqua/import.ts
import { FluencePeer } from '@fluencelabs/fluence';
import {
    ResultCodes,
    RequestFlow,
    RequestFlowBuilder,
    CallParams,
} from '@fluencelabs/fluence/dist/internal/compilerSupport/v1';


// Services

// Functions

export function foo_wrapper(config?: {ttl?: number}) : Promise<string>;
export function foo_wrapper(peer: FluencePeer, config?: {ttl?: number}) : Promise<string>;
export function foo_wrapper(...args) {
    let peer: FluencePeer;
    let config;
    if (args[0] instanceof FluencePeer) {
        peer = args[0];
        config = args[1];
    } else {
        peer = FluencePeer.default;
        config = args[0];
    }

    let request: RequestFlow;
    const promise = new Promise<string>((resolve, reject) => {
        const r = new RequestFlowBuilder()
            .disableInjections()
            .withRawScript(
                `
    (xor
     (seq
      (call %init_peer_id% ("getDataSrv" "-relay-") [] -relay-)
      (xor
       (call %init_peer_id% ("callbackSrv" "response") [" I am MyFooBar bar"])
       (call %init_peer_id% ("errorHandlingSrv" "error") [%last_error% 1])
      )
     )
     (call %init_peer_id% ("errorHandlingSrv" "error") [%last_error% 2])
    )
                `,
            )
            .configHandler((h) => {
                h.on('getDataSrv', '-relay-', () => {
                    return peer.connectionInfo.connectedRelay ;
                });
                h.onEvent('callbackSrv', 'response', (args) => {
                    const [res] = args;
                    resolve(res);
                });
                h.onEvent('errorHandlingSrv', 'error', (args) => {
                    const [err] = args;
                    reject(err);
                });
            })
            .handleScriptError(reject)
            .handleTimeout(() => {
                reject('Request timed out for foo_wrapper');
            })
        if(config && config.ttl) {
            r.withTTL(config.ttl)
        }
        request = r.build();
    });
    peer.internals.initiateFlow(request!);
    return promise;
}
```

Of course, if we change `import.aqua` to include the private `bar`:

```python
import bar from "export.aqua"

func bar_wrapper() -> string:
  res <- bar()
  <- res
```

We get the expected error:

```python
import bar from "export.aqua"
^^^===================
Imported file declares [foo], no bar declared. Try adding `declares *` to that file.
```

As indicated in the error message, `declares *` makes all members of the namespace public, although we can be quite fine-grained and use a comma-separated list of the members we want to be visible, such as `declares foo, bar`.
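
For instance, a sketch of a module that exposes two of its three members might look like this (the `helper` function is hypothetical, following the `export.aqua` example above):

```python
-- export.aqua
module Export declares foo, bar

-- stays private: not listed in declares
func helper() -> string:
  <- "hidden"

func bar() -> string:
  <- helper()

func foo() -> string:
  <- bar()
```

Consumers can then import `foo` and `bar`, but an `import helper from "export.aqua"` would fail with the same error as above.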

### Scoping Inclusion With `use` and `import`

We already encountered the `import` statement earlier. Using `import` with the file name, e.g., `import "export.aqua"`, imports all visible, i.e., public, members from the dependency. We can manage import granularity with the `from` modifier, e.g., `import foo from "file.aqua"`, to limit our imports and subsequent compilation outputs. Moreover, we can alias imported declarations with the `as` modifier, e.g., `import foo as HighFoo, bar as LowBar from "export_file.aqua"`.
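
Aliasing keeps call sites readable when dependencies export clashing names. A minimal sketch, reusing the hypothetical `export_file.aqua` from above:

```python
import foo as HighFoo, bar as LowBar from "export_file.aqua"

func call_both() -> string, string:
  <- HighFoo(), LowBar()
```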

In addition to `import`, we also have the `use` keyword available to link and scope. The difference between `use` and `import` is that `use` brings in module namespaces declared in the referenced source file. For example:

```python
-- export.aqua
module ExportModule declares foo

func foo() -> string:
  <- "I am a foo fighter"
```

declares the `ExportModule` namespace and makes `foo` visible. We can now bring `foo` into scope by means of its module namespace `ExportModule` in our importing file without having to (re-)declare anything:

```python
-- import.aqua
use "export.aqua"

func foo() -> string:
  res <- ExportModule.foo()
  <- res
```

This example already illustrates the power of `use`: we can now declare a local `foo` function rather than the `foo_wrapper` we used earlier. `use` provides namespace separation that is fully enforced at the compiler level, allowing developers to build, update, and extend complex code bases with clear module boundaries by default.

If the referenced file contains no module declaration, `use` falls back to the file name as the namespace. Moreover, we can use the `as` modifier to change the module namespace. Continuing with the above example:

```python
-- import.aqua
use "export.aqua" as RenamedExport

func foo() -> string:
  -- res <- ExportModule.foo() --< this fails
  res <- RenamedExport.foo()
  <- res
```

language/topology.md

---
description: Define where the code is to be executed and how to get there
---

# Topology

Aqua lets developers describe the whole distributed workflow in a single script, link data, recover from errors, implement complex patterns like backpressure, and more. Hence, the network topology is at the heart of Aqua.

Topology in Aqua is declarative: you just need to say where (on what peer) a piece of code must be executed, and optionally how to get there. The Aqua compiler will add all the required network hops.

## On expression

The `on` expression moves execution to the specified peer:

```haskell
on "my peer":
  foo()
```

Here, `foo` is instructed to be executed on a peer with id `my peer`. `on` supports variables of type `string`:

```haskell
-- foo, bar, baz are instructed to be executed on myPeer
on myPeer:
  foo()
  bar()
  baz()
```

{% hint style="danger" %}
`on` does not add network hops on its own: if there are no service calls inside the `on` scope, the node will not be reached. Use `via` to affect the topology without service calls.
{% endhint %}

## `INIT_PEER_ID`

There is one custom peer ID that is always in scope: `INIT_PEER_ID`. It points to the peer that initiated this request.

{% hint style="warning" %}
Using `on INIT_PEER_ID` is an anti-pattern: there is no way to ensure that the init peer is accessible from the currently used part of the network.
{% endhint %}

## `HOST_PEER_ID`

This constant is resolved at compile time. It points to the relay (the host the client is connected to) when Aqua is compiled to be used behind a relay (the default mode, targeting web browsers and other devices that need a relay to receive incoming connections), and to `INIT_PEER_ID` otherwise.
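
For example, a browser-based client can hop to its relay to run a service call there. This sketch assumes the builtin `Peer` service from the `@fluencelabs/aqua-lib` package:

```haskell
import "@fluencelabs/aqua-lib/builtin.aqua"

func relayTimestamp() -> u64:
  on HOST_PEER_ID:
    ts <- Peer.timestamp_ms()
  <- ts
```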

## More complex scenarios

Consider this example:

```go
func foo():
  on "peer foo":
    do_foo()

func bar(i: i32):
  do_bar()

func baz():
  bar(1)
  on "peer baz":
    foo()
    bar(2)
  bar(3)
```

Take a minute to think about:

* Where is `do_foo` executed?
* Where is `bar(1)` executed?
* On what node does `bar(2)` run?
* What about `bar(3)`?

Declarative topology definition always works the same way:

* `do_foo` is executed on "peer foo", always.
* `bar(1)` is executed on the same node where `baz` was running. If `baz` is the first called function, then it's `INIT_PEER_ID`.
* `bar(2)` is executed on `"peer baz"`, despite the fact that `foo` makes a topological transition. `bar(2)` is in the scope of `on "peer baz"`, so it will be executed there.
* `bar(3)` is executed where `bar(1)` was: in the root scope of `baz`, wherever it was called from.

## Accessing peers `via` other peers

In a distributed network it is quite common that a peer is not directly accessible. For example, a browser has no public network interface and you cannot open a socket to a browser at will. Such constraints warrant a `relay` pattern: there should be a well-connected peer that relays requests from a peer to the network and vice versa.

Relays are handled with `via`:

```haskell
-- When we go to some peer from some other peer,
-- the compiler will add an additional hop to some relay
on "some peer" via "some relay":
  foo()

-- More complex path: first go to relay2, then to relay1,
-- then to peer. When going out of peer, do it in reverse
on "peer" via relay1 via relay2:
  foo()

-- You can pass any collection of strings as a relay,
-- and execution will go through it if it's defined,
-- or directly if not
func doViaRelayMaybe(peer: string, relayMaybe: ?string):
  on peer via relayMaybe:
    foo()
```

`on`s nested or delegated in functions work just as you would expect:

```haskell
-- From where we are, -> relay1 -> peer1
on "peer1" via "relay1":
  -- On peer1
  foo()
  -- now go -> relay1 -> relay2 -> peer2
  -- going to relay1 to exit peer1
  -- going to relay2 to enable access to peer2
  on "peer2" via "relay2":
    -- On peer2
    foo()
-- This is executed in the root scope, after we were on peer2
-- How to get there?
-- Compiler knows the path that just worked
-- So it goes -> relay2 -> relay1 -> (where we were)
foo()
```

With `on` and `on ... via`, significant indentation changes the place where the code will be executed, and the paths that are taken when the execution flow "bubbles up" (see the last call of `foo`). It is more efficient to keep the flow as flat as possible. Consider the following change of indentation in the previous script, and how it affects execution:

```haskell
-- From where we are, -> relay1 -> peer1
on "peer1" via "relay1":
  -- On peer1
  foo()

-- now go -> relay1 -> relay2 -> peer2
-- going to relay1 to exit peer1
-- going to relay2 to enable access to peer2
on "peer2" via "relay2":
  -- On peer2
  foo()

-- This is executed in the root scope, after we were on peer2
-- How to get there?
-- Compiler knows the path that just worked
-- So it goes -> relay2 -> (where we were)
foo()
```

Once an `on` scope ends, it no longer affects topology moves. As long as the indentation continues, `on` keeps affecting the topology and may add extra topology moves, which means more round trips and unnecessary latency.

## Callbacks

What if you want to return something to the initial peer? For example, to implement a request-response pattern. Or to send a bunch of requests to different peers and render the responses as they come, in any order.

This can be done with callback arguments in the entry function:

```go
func run(updateModel: Model -> (), logMessage: string -> ()):
  on "some peer":
    m <- fetchModel()
    updateModel(m)
  par on "other peer":
    x <- getMessage()
    logMessage(x)
```

Callbacks have the [arrow type](types.md#arrow-types).

If you pass ordinary functions as arrow-type arguments, they work as if they were hardcoded:

```haskell
func foo():
  on "peer 1":
    doFoo()

func bar(cb: -> ()):
  on "peer2":
    cb()

func baz():
  -- foo will go to peer 1
  -- bar will go to peer 2
  bar(foo)
```

If you pass a service call as a callback, it will be executed locally on the node where you call it. That might change in the future.

Functions that capture the topological context of their definition site are planned but not yet implemented. **Proposed** syntax:

```haskell
func baz():
  foo = do (x: u32):
    -- Executed there, where foo is called
    Srv.call(x)
    <- x
  -- When foo is called, it will get back to this context
  bar(foo)
```

{% embed url="https://github.com/fluencelabs/aqua/issues/183" %}
Issue for adding \`do\` expression
{% endembed %}

{% hint style="warning" %}
Passing service function calls as arguments is very fragile, as it does not track that the service is resolved in the scope of the call. Abilities variance may fix that.
{% endhint %}

## Parallel execution and topology

When blocks are executed in parallel, it is not always necessary to resolve the topology to get to the next peer. The compiler adds topological hops from a `par` branch only if data defined in that branch is used further down the flow.
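
As a sketch (with hypothetical `foo`, `bar`, and `baz` functions), the hops back from both branches below are only inserted because `x` and `y` are consumed afterwards:

```haskell
func fanOut() -> string:
  on "peer 1":
    x <- foo()
  par on "peer 2":
    y <- bar()
  -- x and y are used below, so the compiler adds the
  -- topological hops needed to deliver both values here
  z <- baz(x, y)
  <- z
```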

{% hint style="danger" %}
What if none of the branches returns? Execution will halt. Be careful: use `co` if you don't care about the returned data.
{% endhint %}

language/types.md

# Types

## Scalars

Scalar types follow the Wasm IT notation:

* Unsigned numbers: `u8`, `u16`, `u32`, `u64`
* Signed numbers: `i8`, `i16`, `i32`, `i64`
* Floats: `f32`, `f64`
* Boolean: `bool`
* String: `string`
* Records (product type): see below
* Arrays: see [Collection Types](types.md#collection-types) below

## Literals

You can pass booleans (`true`, `false`), numbers, and double-quoted strings as literals.
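
A hypothetical `Srv` service illustrates each literal kind in argument position:

```haskell
service Srv("srv"):
  accept(flag: bool, n: i32, s: string)

func passLiterals():
  Srv.accept(true, -42, "double-quoted string")
```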

## Products

```haskell
data ProductName:
  field_name: string

data OtherProduct:
  product: ProductName
  flag: bool
```

Fields are accessible with the dot operator `.`, e.g. `product.field_name`.
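
A minimal sketch of field access, chaining the dot operator through the nested records defined above (`describe` is a hypothetical function):

```haskell
func describe(o: OtherProduct) -> string:
  -- dot access chains through nested records
  name = o.product.field_name
  <- name
```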

## Collection Types

Aqua has three different types with variable length, denoted by the quantifiers `[]`, `*`, and `?`:

* `[]` – immutable collection with 0..N values
* `?` – immutable collection with 0 or 1 value
* `*` – appendable collection with 0..N values

Any data type can be prepended with a quantifier, e.g. `*u32`, `[][]string`, `?ProductType` are all correct type specifications.

You can access a distinct value of a collection with the `!` operator, optionally followed by an index.

It is possible to fill any collection with an empty one using `nil`.

Examples:

```haskell
strict_array: []u32
array_of_arrays: [][]u32
element_5 = strict_array!5
element_0 = strict_array!0
element_0_anotherway = strict_array!

-- It could be an argument or any other collection
maybe_value: ?string
-- This ! operator will FAIL if maybe_value is backed by a read-only data structure
-- And will WAIT if maybe_value is backed by a stream (*string)
value = maybe_value!

-- Consider a function that takes a collection as an argument
func foo(a: ?string, b: []u32, c: *bool): ...

-- To call that function with empty collections, use nil, [], ?[], or *[]:
foo(nil, [], *[])
-- nil fits into any collection
```

## Arrow Types

Every function has an arrow type that maps a list of input types to an optional output type.

It can be denoted as: `Type1, Type2 -> Result`

In a type definition, the absence of a result is denoted with `()`, e.g., `string -> ()`.

The absence of arguments is denoted `-> ()`. That is, this mapping takes no arguments and has no return type.

Note that there's no `Unit` type in Aqua: you cannot assign a non-existing result to a value.

```haskell
-- Assume that arrow has type: -> ()

-- This is possible:
arrow()

-- This will lead to an error:
x <- arrow()
```

## Type Alias

For convenience, you can alias a type:

```haskell
alias MyAlias: ?string
```
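
An alias is interchangeable with the type it names, so it can appear anywhere a type is expected. A minimal sketch with a hypothetical `PeerId` alias:

```haskell
alias PeerId: string

func ping(peer: PeerId) -> PeerId:
  <- peer
```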

## Type Variance

Aqua is made for composing data on the open network. That means you want to compose things if they do compose, even if you don't control their source code.

Therefore, Aqua follows the structural typing paradigm: if a type contains all the expected data, then it fits. For example, you can pass `u8` in place of `u16` or `i16`. Or `?bool` in place of `[]bool`. Or `*string` instead of `?string` or `[]string`. The same holds for products.

For arrow types, Aqua checks the variance on arguments and contravariance on the return type.

```haskell
-- We expect u32
xs: *u32

-- u16 is less than u32
foo1: -> u16
-- works
xs <- foo1()

-- i32 has a sign, so cannot fit into u32
foo2: -> i32
-- will fail
xs <- foo2()

-- Function takes an arrow as an argument
func bar(callback: u32 -> u32): ...

foo3: u16 -> u16

-- Will not work
bar(foo3)

foo4: u16 -> u64

-- Works
bar(foo4)
```

Arrow type `A: D -> C` is a subtype of `A1: D1 -> C1`, if `D1` is a subtype of `D` and `C` is a subtype of `C1`.

## Type Of A Service And A File

A service type is a product of arrows.

```haskell
service MyService:
  foo(arg: string) -> bool

-- type of this service is:
data MyServiceType:
  foo: string -> bool
```

The file is a product of all defined constants and functions (treated as arrows). Type definitions in the file do not go into the file type.

```haskell
-- MyFile.aqua

func foo(arg: string) -> bool:
  ...

const FLAG ?= true

-- type of MyFile.aqua
data MyFileType:
  foo: string -> bool
  flag: bool
```

See [Imports and Exports](header/#module) for module declarations.

{% embed url="https://github.com/fluencelabs/aqua/blob/main/types/src/main/scala/aqua/types/Type.scala" %}
See the type system implementation
{% endembed %}

language/variables.md

# Values

Aqua is all about combining data and computations. The runtime for the compiled Aqua code, [AquaVM](https://github.com/fluencelabs/aquavm), tracks what data comes from what origin, which constitutes the foundation for distributed systems security. That approach, driven by π-calculus and the security considerations of open-by-default networks and distributed applications as custom application protocols, also puts constraints on the language that configures it.

Values in Aqua are backed by VDS (Verifiable Data Structures) in the runtime. All operations on values must keep the authenticity of data, proven by signatures under the hood.

That's why values are immutable. Changing a value effectively makes a new one:

```haskell
x = "hello"
y = "world"

-- despite the sources of x and y, z's origin is "peer 1"
-- and we can trust the value of z as much as we trust "peer 1"
on "peer 1":
  z <- concat(x, y)
```

More on that in the Security section. Now let's see how we can work with values inside the language.

## Arguments

Function arguments are available within the whole function body.

```haskell
func foo(arg: i32, log: string -> ()):
  -- Use data arguments
  bar(arg)

  -- Arguments can have arrow type and be used as functions
  log("Wrote arg to responses")
```

## Return values

You can assign the result of an arrow call to a name and use this returned value in the code below.

```haskell
-- Imagine a Stringify service that's always available
service Stringify("stringify"):
  i32ToStr(arg: i32) -> string

-- Define the result type of a function
func bar(arg: i32) -> string:
  -- Make a value, name it x
  x <- Stringify.i32ToStr(arg)
  -- Starting from there, you can use x
  -- Pass x out of the function scope as the return value
  <- x

func foo(arg: i32, log: *string):
  -- Use bar to convert arg to a string, push that string
  -- to the log stream, return nothing
  log <- bar(arg)
```

Aqua functions may return more than one value.

```python
-- Define return types as a comma-separated list
func myFunc() -> bool, string:
  -- The function must return values for all defined types
  <- true, "successful execution"

func otherFunc():
  -- Call a function, ignore the returns
  myFunc()
  -- Get any number of results out of the function
  flag <- myFunc()

  -- Push results to a stream
  results: *string
  is_ok, results <- myFunc()
  if is_ok:
    -- We know that it contains a successful result
    foo(results!)
```

## Literals

Aqua supports just a few literals: numbers, quoted strings, booleans, and `nil`. You [cannot initialize a structure](https://github.com/fluencelabs/aqua/issues/167) in Aqua, only obtain one as the result of a function call.

```haskell
-- String literals cannot contain double quotes
-- No single-quoted strings allowed, no escape chars
foo("double quoted string literal")

-- Booleans are true or false
if x == false:
  foo("false is a literal")

-- Numbers are different
-- Any number:
bar(1)

-- Signed number:
bar(-1)

-- Float:
bar(-0.2)

func takesMaybe(arg: ?string): ...

-- nil can be passed in every place
-- where a read-only collection fits
takesMaybe(nil)
```

## Arithmetic operators

Aqua offers a list of arithmetic and logic operators, introduced in Aqua 0.7.1.

```python
func foo(a: i32, b: i32) -> i32:
  -- Addition
  c = a + b
  -- Subtraction
  d = c - a
  -- Multiplication
  e = d * c
  -- Division
  f = e / d
  -- Power of 3: unsigned number expected
  g = f ** 3
  -- Remainder
  h = g % f

  -- You can use arithmetic anywhere
  -- Better use parentheses to enforce ordering
  <- (a + b) - (c + d * (e - 6))
```

## Collections

With Aqua it is possible to create a [stream](crdt-streams.md), fill it with values, and use it in place of any collection:

```haskell
-- foo returns an array
func foo() -> []string:
  -- Initiate a typed stream
  ret: *string

  -- Push values into the stream
  ret <<- "first"
  ret <<- "second"

  -- Return the stream in place of the array
  <- ret
```

Aqua provides syntax sugar for creating any of the collection types: `[ ... ]` for arrays, `?[ ... ]` for optional values, and `*[ ... ]` for streams.

```haskell
func foo() -> []string, ?bool, *u32:
  <- ["string1", "string2"], ?[true, false], *[1, 3, 5]
```

The `?[]` expression takes any number of arguments but returns an optional value that contains either 0 or 1 values. This is done by trying to yield these values one by one. The first value that yields without an error is added to the resulting option.

```haskell
func getFlag(maybeFlagA: ?bool, maybeFlagB: ?bool, default: bool) -> bool:
  res = ?[maybeFlagA!, maybeFlagB!, default]
  <- res!
```

As of Aqua `0.6.3`, it is not possible to get an element by index directly from a collection creation expression.

## Getters

In Aqua, you can use a getter to peek into a field of a product or an indexed element of an array.

```haskell
data Sub:
  sub: string

data Example:
  field: u32
  arr: []Sub
  child: Sub

func foo(e: Example):
  bar(e.field) -- u32
  bar(e.child) -- Sub
  bar(e.child.sub) -- string
  bar(e.arr) -- []Sub
  bar(e.arr!) -- gets the 0th element
  bar(e.arr!.sub) -- string
  bar(e.arr!2) -- gets the 2nd element
  bar(e.arr!2.sub) -- string
  bar(e.arr[2]) -- gets the 2nd element
  bar(e.arr[2].sub) -- string
  bar(e.arr[e.field]) -- can use any scalar as index with the [] syntax
```

Note that the `!` operator may fail or halt:

* If it is called on an immutable collection, it will fail if the collection is shorter than the given index requires; you can handle the error with [try](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md#try) or [otherwise](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md#otherwise).
* If it is called on an appendable stream, it will wait for some parallel append operation to fulfill it, see [Join behavior](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/parallel.md#join-behavior).

{% hint style="warning" %}
The `!` operator can currently only be used with literal indices.\
That is, `!2` is valid but `!x` is not.\
To access an index with a non-literal, use the brackets index, like `[x]`.
{% endhint %}

## Assignments

Assignments, `=`, only give a name to a value with an applied getter, or to a literal.

```haskell
func foo(arg: bool, e: Example):
  -- Rename the argument
  a = arg
  -- Assign the name b to the value of e.child
  b = e.child
  -- Create a named literal
  c = "just string value"
```

## Constants

Constants are like assignments but in the root scope. They can be used in all function bodies textually below the place of the const definition. Constant values must resolve to a literal.

You can change the compilation results by overriding a constant, but the override needs to be of the same type or a subtype.

Constants are always `UPPER_CASE`.

```haskell
-- This FLAG is always true
const FLAG = true

-- This SETTING can be overwritten via a CLI flag
const SETTING ?= "value"

func foo(arg: string): ...

func bar():
  -- Type of SETTING is string
  foo(SETTING)
```

## Visibility scopes

Visibility scopes follow the contracts of execution flow.

By default, everything defined textually above is available below, with some exceptions.

Functions have isolated scopes:

```haskell
func foo():
  a = 5

func bar():
  -- a is not defined in this function scope
  a = 7
  foo() -- a inside foo is 5
```

A [for loop](flow/iterative.md#export-data-from-for) does not export anything from within it:

```haskell
func foo():
  x = 5
  for y <- ys:
    -- Can use what was defined above
    z <- bar(x)

  -- z is not defined in this scope
  z = 7
```

[Parallel](flow/parallel.md#join-behavior) branches have [no access](https://github.com/fluencelabs/aqua/issues/90) to each other's data:

```haskell
-- This will deadlock, as the foo branch of execution will
-- never send x to the parallel bar branch
x <- foo()
par y <- bar(x)

-- After par is executed, all the data can be used
baz(x, y)
```

Recovery branches in [conditional flow](https://github.com/fluencelabs/aqua-book/tree/4177e00f9313f0e1eb0a60015e1c19a956c065bd/language/operators/conditional.md) have no access to the main branch, as the main branch exports values, whereas the recovery branch does not:

```haskell
try:
  x <- foo()
otherwise:
  -- this is not possible – will fail
  bar(x)
  y <- baz()

-- y is not available below
willFail(y)
```
|
||||||
|
|
||||||
|
## Streams as literals
|
||||||
|
|
||||||
|
Stream is a special data structure that allows many writes. It has [a dedicated article](crdt-streams.md).
|
||||||
|
|
||||||
|
To use a stream, you need to initiate it at first:
|
||||||
|
|
||||||
|
```haskell
|
||||||
|
-- Initiate an (empty) appendable collection of strings
|
||||||
|
resp: *string
|
||||||
|
|
||||||
|
-- Write strings to resp in parallel
|
||||||
|
resp <- foo()
|
||||||
|
par resp <- bar()
|
||||||
|
|
||||||
|
for x <- xs:
|
||||||
|
-- Write to a stream that's defined above
|
||||||
|
resp <- baz()
|
||||||
|
|
||||||
|
try:
|
||||||
|
resp <- baz()
|
||||||
|
otherwise:
|
||||||
|
on "other peer":
|
||||||
|
resp <- baz()
|
||||||
|
|
||||||
|
-- Now resp can be used in place of arrays and optional values
|
||||||
|
-- assume fn: []string -> ()
|
||||||
|
fn(resp)
|
||||||
|
|
||||||
|
-- Can call fn with empty stream: you can use it
|
||||||
|
-- to construct empty values of any collection types
|
||||||
|
nilString: *string
|
||||||
|
fn(nilString)
|
||||||
|
```
|
||||||
|
|
||||||
|
One of the most frequently used patterns for streams is [Conditional return](flow/conditional.md#conditional-return).
|
||||||
|
|
||||||
|
You can create a stream with [Collection creation](variables.md#collections) operators.
|
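As a sketch, assuming the literal operators described in the Collections section (`[...]` for arrays, `?[...]` for options, `*[...]` for streams) and the `<<-` push operator — check the exact syntax against that section:

```haskell
func collections_sketch():
  -- array literal: an immutable collection
  arr = ["a", "b"]
  -- option literal: holds at most one value
  opt = ?["c"]
  -- stream literal: an appendable collection
  str = *["d"]
  -- a stream created from a literal can still be appended to
  str <<- "e"
```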
# Libraries

The `import` declaration allows you to use functions, services, and data types defined in another module on the file system. While it's a very simple mechanism, together with NPM's flexibility it enables full-blown package management in Aqua.

## Available Aqua Libraries

* Builtin services API: [@fluencelabs/aqua-lib](aqua-lib.md) (see on [NPM](https://www.npmjs.com/package/@fluencelabs/aqua-lib))
* PubSub & DHT: [@fluencelabs/aqua-dht](registry.md) (see on [NPM](https://www.npmjs.com/package/@fluencelabs/aqua-dht))
* IPFS API: [@fluencelabs/aqua-ipfs](aqua-ipfs.md) (see on [NPM](https://www.npmjs.com/package/@fluencelabs/aqua-ipfs))

## How To Use Aqua Libraries

To use a library, you need to download it by adding the library to `dependencies` in `package.json` and then running `npm install`.

{% hint style="info" %}
If you're not familiar with NPM, `package.json` is a project definition file. You can specify `dependencies` to be downloaded from [npmjs.org](https://npmjs.org) and define custom commands (`scripts`) that can be executed from the command line, e.g. `npm run compile-aqua`.

To create an NPM project, run `npm init` in a directory of your choice and follow the instructions.
{% endhint %}

Here's an example of adding `aqua-lib` to `package.json` dependencies:

```javascript
{
  "name": "my-awesome-project",
  "version": "0.0.1",
  "dependencies": {
    "@fluencelabs/aqua-lib": "0.1.10"
  }
}
```

After running `npm i`, you will have `@fluencelabs/aqua-lib` in `node_modules`.

### In Aqua

After the library is downloaded, you can import it in your `.aqua` script as documented in [Imports And Exports](../language/header/):

```javascript
import "@fluencelabs/aqua-lib/builtin.aqua"
```

Check out the corresponding subpages for the API of each available library.

### In TypeScript and JavaScript

To execute Aqua functions, you need to be connected to the Fluence network. The easiest way is to add the JS SDK to `dependencies`:

```javascript
"dependencies": {
  "@fluencelabs/registry": "0.3.2",
  "@fluencelabs/fluence": "0.21.5",
  "@fluencelabs/fluence-network-environment": "1.0.13"
}
```

After executing `npm install`, the Aqua API is ready to use. Now you need to export the `registry` functions to TypeScript; that's easy. Create a file `export.aqua`:

```python
import createRouteAndRegisterBlocking, resolveRoute from "@fluencelabs/registry/routing.aqua"

export createRouteAndRegisterBlocking, resolveRoute
```

Now, install the Aqua compiler and compile your Aqua code to TypeScript:

```bash
npm install --save-dev @fluencelabs/aqua
aqua -i . -o src/generated/
```
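If you compile often, it can be convenient to save that invocation as an NPM script in `package.json` (the script name `compile-aqua` here is just an example):

```json
{
  "scripts": {
    "compile-aqua": "aqua -i . -o src/generated/"
  }
}
```

With that in place, `npm run compile-aqua` recompiles the generated TypeScript.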
That's it. Now let's call some functions on the Registry service:

```typescript
import { Fluence } from "@fluencelabs/fluence";
import { krasnodar } from "@fluencelabs/fluence-network-environment";
import { createRouteAndRegisterBlocking, resolveRoute } from "./generated/export";

async function main() {
  // connect to the Fluence network
  await Fluence.start({ connectTo: krasnodar[1] });

  let label = "myLabel";
  let value = "put anything useful here";
  let serviceId = "Foo";
  let ack = 5;

  // create a route and register for it
  let relay = Fluence.getStatus().relayPeerId;
  let route_id = await createRouteAndRegisterBlocking(
    label, value, relay, serviceId,
    (s) => console.log(`node ${s} saved the record`),
    ack
  );

  // this will contain our peer as a route provider
  let providers = await resolveRoute(route_id, ack);
}

main().then(() => process.exit(0))
  .catch(error => {
    console.error(error);
    process.exit(1);
  });
```
---
description: IPFS API bindings in Aqua
---

# @fluencelabs/aqua-ipfs

`aqua-ipfs` lets you call the API of an IPFS daemon, e.g. to transfer files between peers & services or to orchestrate IPFS nodes.

## API

```haskell
service Ipfs("ipfs-adapter"):
  -- upload the file at 'file_path' to the associated IPFS node
  put(file_path: string) -> IpfsPutResult
  -- 'ipfs get', download file $cid from the associated IPFS node
  get(cid: string) -> IpfsGetResult
  -- download file $cid from $external_multiaddr to the local filesystem
  get_from(cid: string, external_multiaddr: string) -> IpfsGetResult
  -- 'ipfs swarm connect', connect the associated IPFS node to $multiaddr
  connect(multiaddr: string) -> IpfsResult
  -- address on which the IPFS RPC is available (usually port 5001)
  get_external_api_multiaddr() -> IpfsMultiaddrResult
  -- address on which IPFS SWARM is available (usually port 4001)
  get_external_swarm_multiaddr() -> IpfsMultiaddrResult

  -- the following is internal API that isn't of much interest
  get_local_api_multiaddr() -> IpfsMultiaddrResult
  set_external_api_multiaddr(multiaddr: string) -> IpfsResult
  set_external_swarm_multiaddr(multiaddr: string) -> IpfsResult
  set_local_api_multiaddr(multiaddr: string) -> IpfsResult
  set_timeout(timeout_sec: u64)
```

## Terminology

* **@fluencelabs/aqua-ipfs** - an Aqua library on NPM. Provides the IPFS API for developing custom Aqua scripts. Suits more advanced use cases.
* **ipfs-adapter** - a WebAssembly service. Predeployed to all Fluence nodes, but it's possible to deploy your own if you need to.
* **particle file vault** - a special temporary directory that is shared between services participating in an Aqua script execution. That directory is local to each peer (i.e., it's not shared between services on different peers). It's accessible inside services by the path `/tmp/vault/$particle-id`.
* **particle** - a signed network packet that contains a script and data. Each script execution produces a single particle with a unique `particle id`.
* **associated IPFS node** - the IPFS daemon that is distributed with each Fluence node. See [_Pre-Deployed IPFS-Adapter_](aqua-ipfs.md#pre-deployed-ipfs-adapter) for more details.

## Concepts

### Where Files Are Located

On disk, in the **particle file vault** directory: `/tmp/vault/$particle-id`.

### How Files Are Shared

When a node downloads a file via `Ipfs.get_from`, the file is stored in the **particle file vault** and the `path` is returned. Other services can read or write to that `path` if there's a command to do so in the Aqua script. That is possible because the **particle file vault** is shared between all services in the context of a script execution.

In effect, it's possible to share files between different services. To prevent data leakage, these files are accessible only to services used in the script.

So, to share a file, a service puts it in the **particle file vault** and returns a path relative to the vault.

## How To Use

### Process File From IPFS

Applications often need to apply different kinds of processing to files: resize an image, extract JSON data, parse, compress, etc.

To achieve that, you'll need to write a simple Aqua script that downloads a file from IPFS and gives the resulting path to a service that implements the desired processing logic.

Here's a simple example of calculating the size of the file specified by an IPFS CID:

{% hint style="success" %}
Take a look at the `ProcessFiles` service [Rust implementation](https://github.com/fluencelabs/examples/blob/2f4679ad01ca64f2863a55389df034120d65d131/intro/4-ipfs-code-execution/service/src/main.rs#L34) and its [Aqua API](https://github.com/fluencelabs/examples/blob/2f4679ad01ca64f2863a55389df034120d65d131/intro/4-ipfs-code-execution/aqua/src/process\_files.aqua)
{% endhint %}

```haskell
-- define type aliases for code readability
type CID: string
type PeerId: string
type Multiaddr: string
type File: string

-- service that implements size calculation
service ProcessFiles:
  file_size(file_path: string) -> u64
  write_file_size(size: u64) -> File

-- download a file & calculate its size
func get_file_size(cid: CID, remote_ipfs: Multiaddr) -> u64:
  result <- Ipfs.get_from(cid, remote_ipfs)
  size <- ProcessFiles.file_size(result.path)
  <- size
```

{% hint style="info" %}
This is simplified code that doesn't handle errors or topology.

For the full example, take a look at [process.aqua](https://github.com/fluencelabs/examples/blob/2f4679ad01ca64f2863a55389df034120d65d131/intro/4-ipfs-code-execution/aqua/src/process.aqua#L44-L73).
{% endhint %}

### Upload Files To IPFS

To upload a file to the _associated_ IPFS node, use `Ipfs.put`. It reads the file from the **particle file vault** and uploads it to the _associated_ IPFS node.

Let's take a look at an example.

Using the `get_file_size` function from the previous example, it's possible to calculate the size of a file. But now let's upload the size to IPFS, so anyone can download it:

```haskell
-- store the calculated size on IPFS
func store_file_size(cid: CID, remote_ipfs: Multiaddr) -> CID:
  size <- get_file_size(cid, remote_ipfs)
  file <- ProcessFiles.write_file_size(size)
  put <- Ipfs.put(file)
  <- put.hash
```

### Get The Address Of The Associated IPFS Node

To download something from the associated IPFS node, you need to know its multiaddress.

Use the `Ipfs.get_external_api_multiaddr` function to achieve that.

For example, after the processed file is uploaded to IPFS, you might want to download it in the browser. For that, in TS code you would do something like this:

```typescript
import { Multiaddr, protocols } from 'multiaddr';
import { krasnodar } from '@fluencelabs/fluence-network-environment';
const { create } = require('ipfs-http-client');

// retrieve the RPC multiaddr
let rpcAddr = await get_external_api_multiaddr(krasnodar[1]);
// remove /p2p/123D... from the multiaddr
let rpcMaddr = new Multiaddr(rpcAddr).decapsulateCode(protocols.names.p2p.code);
// connect ipfs-js to the RPC multiaddr
const ipfs = create(rpcMaddr);
// download the file via ipfs-js
let file = await ipfs.get(cid);
```

## Fluence And IPFS

### Pre-Deployed IPFS-Adapter

Each Fluence node comes with an associated IPFS daemon to handle file transfers and caching. When a Fluence node starts, an instance of the `ipfs-adapter` service is created and connected to the associated IPFS daemon.

In effect, each Fluence node provides a WebAssembly service with the id `"ipfs-adapter"` that you can use to orchestrate file transfers between apps, download .wasm modules, and deploy services from them. In that sense, Fluence provides a compute layer on top of IPFS, while IPFS takes care of file storage.

When you're using the `@fluencelabs/aqua-ipfs` library, it connects to the `"ipfs-adapter"` service by default.

{% hint style="success" %}
It is possible to create a custom setup of the `ipfs-adapter` service or to associate it with an external IPFS node. Please contact us in [Discord](https://discord.gg/hDNdaBP45e) and we'll help you with that.

Alternatively, check out how IPFS is set up in our [Dockerfile](https://github.com/fluencelabs/node-distro/blob/main/Dockerfile#L24-L26).
{% endhint %}

### How The Interaction With An IPFS Daemon Works

The Marine Wasm runtime provides us with a secure yet powerful way to interact with external programs: running binaries on the host. That mechanism is called "mounted binaries".

`ipfs-adapter` "mounts" the IPFS CLI utility internally and uses it to interact with the associated IPFS daemon. That is, when you call `Ipfs.put(file_path)`, it's just `ipfs put $file_path`, like you would do in the terminal, and when you call `Ipfs.connect` it's just `ipfs swarm connect`.

That makes it very easy to express any existing IPFS API in Aqua. So if you're missing some IPFS methods, please let us know!
---
description: API of protocol-level services (a.k.a. builtins)
---

# @fluencelabs/aqua-lib

## Releases

You can find the latest releases of `aqua-lib` [on NPM](https://www.npmjs.com/package/@fluencelabs/aqua-lib) and the changelogs [on GitHub](https://github.com/fluencelabs/aqua-lib/releases).

## API

The most up-to-date documentation of the API is in the code, so please [check it out on GitHub](https://github.com/fluencelabs/aqua-lib/blob/main/builtin.aqua).

### Services

`aqua-lib` defines a number of services available on peers in the Fluence Network:

* `Op` - short for "Operations". Functions for data transformation.
* `Peer` - functions affecting a peer's internal state.
* `Kademlia` - functions to manipulate libp2p Kademlia.
* `Srv` - short for "Service". Functions for service manipulation.
* `Dist` - short for "Distribution". Functions for module and blueprint distribution.
* `Script` - functions to run and remove scheduled (recurring) scripts.

## How to use it

### In Aqua

Add `@fluencelabs/aqua-lib` to your dependencies as described in the [Libraries doc](./), and then import it in your Aqua script:

```haskell
import "@fluencelabs/aqua-lib"

-- gather Peer.identify from all nodes in the neighbourhood
func getPeersInfo() -> []Info:
  infos: *Info
  nodes <- Kademlia.neighborhood(%init_peer_id%, nil, nil)
  for node <- nodes:
    on node:
      infos <- Peer.identify()
  <- infos
```

### In TypeScript

`aqua-lib` is meant to be used to write Aqua scripts, and since `aqua-lib` doesn't export any top-level functions, it's not callable directly from TypeScript.

## Patterns

### Functions With A Variable Number Of Arguments

Currently, Aqua doesn't allow defining functions with a variable number of arguments, and that limits the `aqua-lib` API. But there's a way around it.

Let's take `Op.concat_strings` as an example. You can use it to concatenate several strings. `aqua-lib` provides the following signature:

```haskell
concat_strings(a: string, b: string) -> string
```

Sometimes that is enough, but sometimes you need to concatenate more than two strings at a time. Happily, under the hood `concat_strings` accepts any number of arguments, so you can redeclare it with the number of arguments that you want:

```haskell
service MyOp("op"):
  concat_strings(a: string, b: string, c: string, d: string) -> string
```

#### List of operations with a variable number of arguments

Here's a full list of other `Op`-s that you can apply this pattern to:

* `Op.concat` - concatenates any number of arrays
* `Op.array` - wraps any number of arguments into an array
* `Op.concat_string` - concatenates any number of strings
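For illustration, here's a hypothetical function using the redeclared service (the `greet` function is made up; `MyOp` is the service redeclared above):

```haskell
-- concatenate four strings via the redeclared four-argument variant
func greet(name: string) -> string:
  greeting <- MyOp.concat_strings("Hello", ", ", name, "!")
  <- greeting
```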
|
|||||||
|
---
|
||||||
|
description: Aqua implementation of Fluence Registry and ResourcesAPI
|
||||||
|
---
|
||||||
|
|
||||||
|
# @fluencelabs/registry
|
||||||
|
|
||||||
|
Fluence Registry is an essential part of the Fluence network protocol. It provides a Resources API that can be used for service advertisement and discovery. For more details check out our [community call](https://youtu.be/Md0\_Ny\_5\_1o?t=770).
|
||||||
|
|
||||||
|
## Releases
|
||||||
|
|
||||||
|
You can find the latest `registry` release [on NPM](https://www.npmjs.com/package/@fluencelabs/registry) and the changelogs are in the [GitHub](https://github.com/fluencelabs/aqua-dht/releases) repo.
|
||||||
|
|
||||||
|
## API
|
||||||
|
|
||||||
|
For the API implementation, take a look at [resources-api.aqua](https://github.com/fluencelabs/registry/blob/main/aqua/resources-api.aqua) in the `registry` repo.
|
||||||
|
|
||||||
|
## Terminology
|
||||||
|
|
||||||
|
* **Registry** - a service that provides low-level API. `resources-api.aqua` is built on top of it.
|
||||||
|
* **@fluencelabs/registry** - an Aqua library on NPM that provides high-level and low-level APIs to develop custom registry scripts.
|
||||||
|
* **Resource/Provider** - a pattern for peer/service advertisement and discovery. **Providers** register for a **Resource** that can be discovered by its **resource\_id**.
|
||||||
|
* **Kademlia** - an algorithm for structuring a peer-to-peer network so that peers can find each other efficiently, i.e., in no more than O(logN) hops where N is the total number of peers in the network.
|
||||||
|
* **Resource** – a `string` label with associated `owner_peer_id` and a list of providers. A resource should be understood as a group of services or a group of peers united by some common feature. In low-level API **Resource** is a **Registry key.**
|
||||||
|
* **Registry Key** - a structure, signed by `owner_peer_id`, which holds information about Resource:
|
||||||
|
|
||||||
|
```
|
||||||
|
data Key:
|
||||||
|
id: string
|
||||||
|
label: string
|
||||||
|
owner_peer_id: string
|
||||||
|
timestamp_created: u64
|
||||||
|
challenge: []u8
|
||||||
|
challenge_type: string
|
||||||
|
signature: []u8
|
||||||
|
```
|
||||||
|
|
||||||
|
* **resource\_id** - a stable identifier created from the hash of `label` and `owner_peer_id` used to identify any resource. `id` field of Registry `Key`
|
||||||
|
* **Resource owner** - the `owner_peer_id` that created the resource. Other users can create resources with the same label but the identifier will be different because of the `owner_peer_id.`
|
||||||
|
* **challenge/challenge\_type** – dummy fields which will be used for permission management in the next Registry iterations.
|
||||||
|
* **Provider** – a peer which is registered as a resource provider, optionally with an associated **relay\_id** and **service\_id**. Each provider is associated with a **Registry record.**
|
||||||
|
* **Registry record** - a structure, signed by `set_by` peer\_id, which holds information about Provider:
|
||||||
|
|
||||||
|
```
|
||||||
|
data Record:
|
||||||
|
key_id: string
|
||||||
|
value: string
|
||||||
|
peer_id: string
|
||||||
|
set_by: string
|
||||||
|
relay_id: []string
|
||||||
|
service_id: []string
|
||||||
|
timestamp_created: u64
|
||||||
|
solution: []u8
|
||||||
|
signature: []u8
|
||||||
|
```
|
||||||
|
|
||||||
|
* provider's **value** – any string which can be defined and used in accordance with protocol requirements.
|
||||||
|
* provider's **peer\_id** – peer id of provider of this resource.
|
||||||
|
* provider's **relay\_id** - optional, set if provider is available on the network through this relay. 
|
||||||
|
|
||||||
|
{% hint style="info" %}
|
||||||
|
When a **provider** doesn't have a publicly accessible IP address, e.g. the client peer is a browser, it connects to the network through a relay node. That means that _other_ peers only can connect to this **provider** through a relay. In that case,`registerProvider implicitly set relay_id to`**`HOST_PEER_ID`**. 
|
||||||
|
{% endhint %}
|
||||||
|
|
||||||
|
* **provider**'s **service\_id** - optional, id of the service of that provider.
|
||||||
|
* **solution** – dummy field, will be used for permission checking in the next Registry iterations.
|
||||||
|
* **provider limit** - a **resource** can have at most 32 providers. Each new provider added after the provider limit has been reached results in removing an old provider following the FIFO principle. Soon provider's prioritization will be handled by TrustGraph.
|
||||||
|
* **host provider's record** - a **Registry record** with `peer_id`of a node. When a node is registered as a provider via `registerNodeProvider` or `createResourceAndRegisterNodeProvider`, the **record** is a **host record**. Host records live through garbage collection, unlike other Registry **records**. [See Register As A Provider ](registry.md#register-as-a-provider)for details.
|
||||||
|
* **resource** and **provider lifetime** - a **resource** and **provider record** are **** republished every **** [1 hour](https://github.com/fluencelabs/registry/blob/main/service/src/defaults.rs#L23) and evicted, i.e. removed, [24 hours](https://github.com/fluencelabs/registry/blob/main/service/src/defaults.rs#L24) after being unused.
|
||||||
|
|
||||||
|
{% hint style="info" %}
|
||||||
|
So there are two types of providers. First is a node provider which lifetime controlled by this node. Second is a JS peer provider and should be renewed periodically by this peer.
|
||||||
|
{% endhint %}
|
||||||
|
|
||||||
|
* **script caller** - a peer that executes a script by sending it to the network. In Aqua it's `INIT_PEER_ID`
|
||||||
|
* **node** - usually a Fluence node hosted by the community or Fluence Team. Nodes are long-lived, can host WebAssembly services and participate in the Kademlia network.
|
||||||
|
|
||||||
|
## How To Use Registry
|
||||||
|
|
||||||
|
{% hint style="info" %}
|
||||||
|
There are [several simple examples](https://github.com/fluencelabs/registry#how-to-use) in the `fluencelabs/registry` repo. Give them a look.
|
||||||
|
{% endhint %}
|
||||||
|
|
||||||
|
### Create A Resource
|
||||||
|
|
||||||
|
Before registering as a provider is possible, resource must be created. That's exactly what `createResource` does. 
|
||||||
|
|
||||||
|
Here's a simple Aqua example:
|
||||||
|
|
||||||
|
```haskell
|
||||||
|
import "@fluencelabs/registry/resources-api.aqua"
|
||||||
|
import "@fluencelabs/aqua-lib/builtin.aqua"
|
||||||
|
|
||||||
|
func my_function(label: string) -> ?string, *string:
|
||||||
|
resource_id, errors <- createResource(resource)
|
||||||
|
if resource_id != nil:
|
||||||
|
-- resource created successfully
|
||||||
|
Op.noop()
|
||||||
|
else:
|
||||||
|
-- resource creation failed
|
||||||
|
Op.noop()
|
||||||
|
|
||||||
|
<- resource_id, errors
|
||||||
|
```
|
||||||
|
|
||||||
|
### Register As A Provider
|
||||||
|
|
||||||
|
There are four functions that register providers. Let's review them.
|
||||||
|
|
||||||
|
These you would use for most of your needs:
|
||||||
|
|
||||||
|
* `registerProvider` - registers`INIT_PEER_ID` as a provider for existent resource.
|
||||||
|
* `createResourceAndRegisterProvider` - creates a resource first and then registers `INIT_PEER_ID` as a provider for it.
|
||||||
|
|
||||||
|
And these are needed to register a node provider for a resource:
|
||||||
|
|
||||||
|
* `registerNodeProvider` - registers the given node as a provider for an existing resource.
|
||||||
|
* `createResourceAndRegisterNodeProvider` - creates a resource first and then registers the given node as a provider.
|
||||||
|
|
||||||
|
Now, let's review them in more detail.
|
||||||
|
|
||||||
|
#### `createResourceAndRegisterProvider` & `registerProvider`
|
||||||
|
|
||||||
|
These functions register the **caller** of a script as a provider:
|
||||||
|
|
||||||
|
* `createResourceAndRegisterProvider` creates a resource prior to registration
|
||||||
|
* `registerProvider` simply adds a registration as a provider for existing resource.
|
||||||
|
|
||||||
|
#### `createResourceAndRegisterNodeProvider` & `registerNodeProvider`
|
||||||
|
|
||||||
|
These two functions work almost the same as their non-`Node` counterparts, except that they register a **node** instead of a **caller**. This is useful when you want to register a **service** hosted on a **node**.
|
||||||
|
|
||||||
|
Records created by these two functions live through garbage collection unlike records created by `registerProvider.`
|
||||||
|
|
||||||
|
Here's how you could use it in TypeScript:
|
||||||
|
|
||||||
|
{% hint style="info" %}
|
||||||
|
You first need to have `export.aqua` file and compile it to TypeScript, see [here](./#in-typescript-and-javascript)
|
||||||
|
{% endhint %}
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
import {Fluence, KeyPair} from "@fluencelabs/fluence";
|
||||||
|
import { krasnodar } from "@fluencelabs/fluence-network-environment";
|
||||||
|
import {registerNodeProvider, createResource, registerProvider, resolveProviders} from "./generated/export";
|
||||||
|
import assert from "assert";
|
||||||
|
|
||||||
|
async function main() {
|
||||||
|
// create the first peer and connect it to the network
|
||||||
|
await Fluence.start({ connectTo: krasnodar[1] });
|
||||||
|
console.log(
|
||||||
|
"📗 created a fluence peer %s with relay %s",
|
||||||
|
Fluence.getStatus().peerId,
|
||||||
|
Fluence.getStatus().relayPeerId
|
||||||
|
);
|
||||||
|
|
||||||
|
let label = "myLabel";
|
||||||
|
console.log("Will create resource with label:", label);
|
||||||
|
let [resource_id, create_error] = await createResource(label);
|
||||||
|
assert(resource_id !== null, create_error.toString());
|
||||||
|
console.log("resource %s created successfully", resource_id);
|
||||||
|
|
||||||
|
let value = "myValue";
|
||||||
|
let node_provider = krasnodar[4].peerId;
|
||||||
|
let service_id = "identity";
|
||||||
|
let [node_success, reg_node_error] = await registerNodeProvider(node_provider, resource_id, value, service_id);
|
||||||
|
assert(node_success, reg_node_error.toString());
|
||||||
|
console.log("node %s registered as provider successfully", node_provider);
|
||||||
|
|
||||||
|
let [success, reg_error] = await registerProvider(resource_id, value, service_id);
|
||||||
|
console.log("peer %s registered as provider successfully", Fluence.getStatus().peerId);
|
||||||
|
assert(success, reg_error.toString());
|
||||||
|
|
||||||
|
let [providers, error] = await resolveProviders(resource_id, 2);
|
||||||
|
// as a result we will see two providers records
|
||||||
|
console.log("resource providers:", providers);
|
||||||
|
}
|
||||||
|
|
||||||
|
main().then(() => process.exit(0))
|
||||||
|
.catch(error => {
|
||||||
|
console.error(error);
|
||||||
|
process.exit(1);
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Renew Record Periodically
|
||||||
|
|
||||||
|
After a non-host record is created, it must be used at least once an hour to keep it from being marked **stale** and deleted. Also, peers must renew themselves at least once per 24 hours to prevent record **expiration** and deletion.
|
||||||
|
|
||||||
|
While this collection schedule may seem aggressive, it keeps the Registry up-to-date and performant as short-lived client-peers, such as browsers, can go offline at any time or periodically change their relay nodes.
|
||||||
|
|
||||||
|
### Call A Function On Resource Providers
|
||||||
|
|
||||||
|
#### `executeOnProviders`
|
||||||
|
|
||||||
|
`registry` provides a function to callback on every **Record** associated with a resource:
|
||||||
|
|
||||||
|
```haskell
|
||||||
|
func executeOnProviders(resource_id: string, ack: i16, call: Record -> ())
|
||||||
|
```
|
||||||
|
|
||||||
|
It reduces boilerplate when writing an Aqua script that calls a (common) function on each provider. For example:

```haskell
import "@fluencelabs/registry/resources-api.aqua"

-- API that every subscriber must adhere to
-- You can think of it as an application protocol
service ProviderAPI:
  do_smth(value: string)

func call_provider(p: Record):
  -- topological move to provider via relay
  on p.peer_id via p.relay_id:
    -- resolve service on a provider
    ProviderAPI p.service_id
    -- call function
    ProviderAPI.do_smth(p.value)

-- call ProviderAPI.do_smth() on every provider
func call_everyone(resource_id: String, ack: i16):
  executeOnProviders(resource_id, ack, call_provider)
```

#### Passing Data To Providers

Due to the limitations in callbacks, `executeOnProviders` doesn't allow us to send dynamic data to providers. However, this limitation is easily overcome by using a `for` loop.

Consider this Aqua code:

```haskell
import "@fluencelabs/registry/resources-api.aqua"

-- Application event
data Event:
  value: string

-- API that every provider must adhere to
-- You can think of it as an application protocol
service ProviderAPI:
  receive_event(event: Event)

func call_provider(p: Record, event: Event):
  -- topological move to provider via relay
  on p.peer_id via p.relay_id:
    -- resolve service on a provider
    ProviderAPI p.service_id
    -- call function
    ProviderAPI.receive_event(event)

-- send event to every provider
func send_everyone(resource_id: String, ack: i16, event: Event):
  -- retrieve all providers of a resource
  providers <- resolveProviders(resource_id, ack)
  -- iterate through them in parallel
  for p <- providers par:
    call_provider(p, event)
```

### Handling Function Calls

The [Fluence JS SDK](https://github.com/fluencelabs/fluence-js) allows JS/TS peers to define their API through services and functions.

Let's take the `ProviderAPI` from the previous example and extend it a little:

```haskell
data Event:
  value: string

service ProviderAPI:
  -- receive an event
  receive_event(event: Event)
  -- do something and return data
  do_something(value: string) -> u64
```

Let's save this file to `provider_api.aqua` and compile it:

```
aqua -i . -o src/generated
```

Then implement and register the API on a JS/TS peer:

```typescript
import { Fluence } from "@fluencelabs/fluence";
import { krasnodar } from "@fluencelabs/fluence-network-environment";
import { registerProviderAPI, ProviderAPIDef } from "./generated/provider_api";

async function main() {
  await Fluence.start({ connectTo: krasnodar[2] });

  let service_id = 'api';
  let counter = 0;

  await registerProviderAPI(service_id, {
    receive_event: (event: any) => {
      console.log("event received!", event);
    },
    do_something: (value: any) => {
      counter += 1;
      console.log("doing logging!", value, counter);
      return counter;
    }
  });
}

main().then(() => process.exit(0))
  .catch(error => {
    console.error(error);
    process.exit(1);
  });
```

### Overcoming The Record Limit

If your app requires more than 32 providers for a single resource, it's time to think about a custom WebAssembly service that stores all these records: basically, a simple "records directory" service.

With such a service implemented and deployed, you can use `resources-api.aqua` to register the "records directory" service and its host as a provider. Depending on your app's architecture, you might want several instances of the "records directory" service.

The code to get all records from "directory" services might look something like this in Aqua:

```haskell
import "@fluencelabs/registry/resources-api.aqua"

service RecDir:
  get_records(resource_id: string) -> []Record

func dir_subscribers(resource_id: String, ack: i16) -> [][]Record:
  -- this stream will hold all records
  allRecs: *[]Record
  -- retrieve RecDir records from Registry
  providers <- resolveProviders(resource_id, ack)
  -- iterate through all RecDir services
  for dir <- providers:
    on dir.peer_id:
      RecDir dir.service_id
      -- get all records from RecDir and write to allRecs
      allRecs <- RecDir.get_records(resource_id)
  <- allRecs
```
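
For illustration, the "records directory" itself is essentially a multimap from resource id to records. Here is a minimal sketch of that logic in TypeScript (a real directory would be a WebAssembly service; the `Rec` fields are assumptions modeled on Registry records):

```typescript
// Minimal sketch of a "records directory": unlike the Registry, it has no
// 32-record cap per resource. Field names on Rec are illustrative.
interface Rec {
  peer_id: string;
  relay_id?: string;
  service_id?: string;
  value: string;
}

class RecordsDirectory {
  private byResource = new Map<string, Rec[]>();

  addRecord(resourceId: string, rec: Rec): void {
    const list = this.byResource.get(resourceId) ?? [];
    list.push(rec);
    this.byResource.set(resourceId, list);
  }

  // counterpart of RecDir.get_records from the Aqua snippet above
  getRecords(resourceId: string): Rec[] {
    return this.byResource.get(resourceId) ?? [];
  }
}
```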

## Concepts

### Kademlia Neighborhood

Fluence nodes participate in the Kademlia network. Kademlia organizes peers in such a way that, given any key, you can find a set of peers that are "responsible" for that key. That set contains up to 20 nodes.

That set is called the "neighborhood" or the "K-closest nodes" (K=20). In Aqua, it is accessible in `aqua-lib` via the `Kademlia.neighbourhood` function.

The two most important properties of the Kademlia neighborhood are:

1. it exists for _any_ key
2. it is more or less stable
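
The "K-closest" idea can be illustrated with Kademlia's XOR metric: the neighborhood of a key is the K peers whose IDs have the smallest XOR distance to it. This is a simplified sketch with `bigint` IDs, not Fluence's actual implementation (real peer IDs are multihashes):

```typescript
// Simplified sketch of Kademlia neighborhood selection: sort peers by
// XOR distance to the key and take the K closest (K = 20 here, as in the text).
function kClosest(peerIds: bigint[], key: bigint, k: number = 20): bigint[] {
  return [...peerIds]
    .sort((a, b) => {
      const da = a ^ key;
      const db = b ^ key;
      return da < db ? -1 : da > db ? 1 : 0;
    })
    .slice(0, k);
}
```

Because the distance is deterministic, every peer computes the same neighborhood for a given key, which is what lets Registry writes and reads meet at the same set of nodes.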

### Data Replication

#### On write

When a peer registers as a provider for a resource, the record is written to the Kademlia neighborhood of that resource\_id. Here's the `registerProvider` implementation in Aqua:

```haskell
-- Register for a resource as provider
-- Note: resource must be already created
func registerProvider(resource_id: ResourceId, value: string, service_id: ?string) -> bool, *Error:
  success: *bool
  error: *Error
  relay_id: ?string
  relay_id <<- HOST_PEER_ID

  t <- Peer.timestamp_sec()

  on HOST_PEER_ID:
    record_sig_result <- getRecordSignature(resource_id, value, relay_id, service_id, t)

    if record_sig_result.success == false:
      error <<- record_sig_result.error!
      success <<- false
    else:
      record_signature = record_sig_result.signature!
      -- find resource
      key, error_get <- getResource(resource_id)
      if key == nil:
        appendErrors(error, error_get)
        success <<- false
      else:
        successful: *bool
        -- get neighbourhood for the resource_id
        nodes <- getNeighbours(resource_id)
        -- iterate through each node in the neighbourhood
        for n <- nodes par:
          on n:
            try:
              -- republish key/resource
              republish_res <- republishKey(key!)
              if republish_res.success == false:
                error <<- republish_res.error
              else:
                -- register as a provider on each node in the neighbourhood
                put_res <- putRecord(resource_id, value, relay_id, service_id, t, record_signature)
                if put_res.success:
                  successful <<- true
                else:
                  error <<- put_res.error

        timeout: ?string
        -- at least one successful write should be performed
        join successful[0]
        par timeout <- Peer.timeout(5000, "provider hasn't registered: timeout exceeded")

        if timeout == nil:
          success <<- true
        else:
          success <<- false
          error <<- timeout!

  <- success!, error
```

This ensures that data is replicated across several peers.

#### At rest

Resource Keys and Provider records are also replicated "at rest": once per hour, all **stale** keys and records are removed and the remaining ones are replicated to all nodes in the neighborhood; once per day, all **expired** keys and records are removed.

This ensures that even if the neighborhood for a **resource\_id** has changed because some peers went offline and others joined the network, data will still be replicated to all nodes in the current neighborhood.
...

{% hint style="info" %}
For advanced users accustomed to Aqua scripts:

There's an implementation of "at rest" replication for Registry [on GitHub](https://github.com/fluencelabs/registry/blob/main/aqua/registry-scheduled-scripts.aqua).
{% endhint %}